OK, I'll give it a try. Thanks!
Sun.Zhu
Email: 17626017...@163.com
On 2020-05-04 11:22, Jark Wu wrote:
This looks like a bug that has already been fixed (FLINK-16108).
Could you try again with release-1.10.1, which is currently going through RC?
https://dist.apache.org/repos/dist/dev/flink/flink-1.10.1-rc2/
Best,
Jark
On Mon, 4 May 2020 at 01:01, 祝尚 <17626017...@163.com> wrote:
> Following the demo in Jark's blog, I wrote a Table API/SQL program, and it fails with an error when converting the table to an append stream.
> Flink version: 1.10
> The code is as follows:
> public
https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/mem_setup.html
Thank you~
Xintong Song
On Fri, May 1, 2020 at 8:35 AM shao.hongxiao <17611022...@163.com> wrote:
> Hi Song,
> > Please refer to this document [1] for more details
> Could you share the exact link? I have also noticed that the memory figures
> shown in the Flink UI look wrong, and I would like to read the related
> documentation carefully.
>
> Thanks!
Following up on this,
I tried tweaking the JDBC sink as Timo suggested and it worked.
Basically, I added a member long maxWatermarkSeen in JDBCOutputFormat,
so whenever a new record is added to the batch it updates the
maxWatermarkSeen for this subtask with
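The idea described above can be sketched as follows. This is a minimal, self-contained illustration, not Flink's actual JDBCOutputFormat: the class name and methods here are placeholders, and the real implementation would live inside the Flink sink's writeRecord/flush lifecycle.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of a batching sink that tracks the largest watermark
// seen among its buffered records, so the flush step can report progress
// (e.g. to a metadata table) alongside the JDBC batch write.
public class WatermarkTrackingBatchSink {
    private final List<String> batch = new ArrayList<>();
    private long maxWatermarkSeen = Long.MIN_VALUE;

    // Buffer a record and keep the per-subtask high watermark up to date.
    public void writeRecord(String record, long watermark) {
        batch.add(record);
        if (watermark > maxWatermarkSeen) {
            maxWatermarkSeen = watermark;
        }
    }

    // On flush, the batch would be written via JDBC; the max watermark
    // survives the flush so later flushes only ever report larger values.
    public long flush() {
        batch.clear();
        return maxWatermarkSeen;
    }

    public long getMaxWatermarkSeen() {
        return maxWatermarkSeen;
    }
}
```

The key design point is that the watermark is monotone per subtask: it is never reset on flush, only advanced as new records arrive.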
Thanks for your reply.
But as far as I know, if the time attribute is retained and the time
attribute fields of both streams are selected in the result after joining,
which one becomes the final time attribute?
On Thu, 30 Apr 2020 at 20:25, Benchao Li wrote:
> Hi lec,
>
> AFAIK, time attribute will be
yes, exactly; I want to rule out that (somehow) HDFS is the problem.
I couldn't reproduce the issue locally myself so far.
On 01/05/2020 22:31, Hailu, Andreas wrote:
Hi Chesnay, yes – they were created using Flink 1.9.1 as we’ve only
just started to archive them in the past couple weeks.
Following the demo in Jark's blog, I wrote a Table API/SQL program, and it fails with an error when converting the table to an append stream.
Flink version: 1.10
The code is as follows:
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
// the old planner will be removed in a future version
EnvironmentSettings settings =
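For reference, a minimal Flink 1.10 skeleton of the pattern being set up here (blink planner, a SQL query, then toAppendStream). This is a sketch only: the SQL query is a placeholder, not the original program's code, and it needs the Flink 1.10 dependencies on the classpath.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class TableToAppendStream {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Use the blink planner; the old planner will be removed later.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();
        StreamTableEnvironment tEnv =
                StreamTableEnvironment.create(env, settings);

        // Placeholder query; the original thread's SQL is not shown here.
        Table result = tEnv.sqlQuery("SELECT ...");

        // toAppendStream only works for insert-only results; updating
        // results need toRetractStream instead.
        DataStream<Row> stream = tEnv.toAppendStream(result, Row.class);
        stream.print();
        env.execute("table-to-append-stream");
    }
}
```

Note that the FLINK-16108 fix mentioned earlier in the thread applies when running a program like this on 1.10.0, which is why retrying on 1.10.1 was suggested.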
Hi Congxian,
Thank you so much for your response, that really helps!
From your experience, how long does it take for Flink to redistribute
terabytes of state data on node addition / node failure?
Thanks!
On Sun, May 3, 2020 at 6:56 PM Congxian Qiu wrote:
> Hi
>
> 1. From my experience, Flink
Hi
In principle, configuring this in flink-conf should work in 1.10; see issue [1] for details. Which version are you using?
[1] https://issues.apache.org/jira/browse/FLINK-14788
Best,
Congxian
On Thu, 30 Apr 2020 at 18:51, zhisheng wrote:
> It seems this parameter can be set per job; you could give that a try
>
> env.getCheckpointConfig().setTolerableCheckpointFailureNumber();
>
> 蒋佳成(Jiacheng Jiang)
Hi
From the given figure, it seems that the end-to-end duration of the two failed
checkpoints is small (so they did not fail due to a timeout). Could you please
check why they failed?
Maybe you can find something in the JM log such as "Decline checkpoint {} by
task {} of job {} at {}."
then you can go to
Hi
1. From my experience, Flink can support such big state; you can set an
appropriate parallelism for the stateful operator. For RocksDB you may need
to pay attention to disk performance.
2. Inside Flink, the state is partitioned by key-group; each
parallel task handles multiple key-groups. Flink
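The key-group partitioning described in point 2 can be sketched as below. This is a simplified illustration: it uses a plain `hashCode` where Flink actually applies a murmur hash on top, and the class and method names are made up for the example. The range assignment (key-group `kg` goes to subtask `kg * parallelism / maxParallelism`) matches the scheme Flink uses, and it is why rescaling only moves whole key-groups between subtasks rather than individual keys.

```java
// Simplified sketch of Flink's key-group idea: every key hashes into one
// of maxParallelism key-groups, and each subtask owns a contiguous range
// of key-groups. Illustrative only; Flink's real code murmur-hashes the
// key's hashCode before taking the modulo.
public class KeyGroupSketch {
    // Which key-group a key belongs to (stable across rescaling, since
    // maxParallelism is fixed for the life of the job).
    public static int keyGroupFor(Object key, int maxParallelism) {
        return Math.floorMod(key.hashCode(), maxParallelism);
    }

    // Which subtask owns a given key-group at the current parallelism.
    public static int subtaskFor(int keyGroup, int parallelism,
                                 int maxParallelism) {
        return keyGroup * parallelism / maxParallelism;
    }
}
```

With maxParallelism 128 and parallelism 4, each subtask owns a contiguous block of 32 key-groups; changing parallelism to 8 just splits each block in two, so state redistribution is a matter of reassigning key-group ranges.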
Dear community,
happy to share - a brief - community update this week with an update on
Flink 1.10.1, our application to Google Season of Docs 2020, a discussion
to support Hadoop 3, a recap of Flink Forward Virtual 2020 and a bit more.
Flink Development
==
* [releases] Yu has