Hi,

As described in [1], for csv and json you also need to configure the rolling-related parameters, because those row formats can roll part files outside of checkpoints and are not forced to roll on checkpoint.

NOTE: For row formats (csv, json), you can set the parameter
sink.rolling-policy.file-size or sink.rolling-policy.rollover-interval in
the connector properties and parameter execution.checkpointing.interval in
flink-conf.yaml together if you don’t want to wait a long period before
observe the data exists in file system. For other formats (avro, orc), you
can just set parameter execution.checkpointing.interval in flink-conf.yaml.

So if you want to roll files by time, you also need to configure sink.rolling-policy.rollover-interval and sink.rolling-policy.check-interval.
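A minimal sketch of what that configuration could look like in DDL (the table name, columns, and path below are hypothetical; the rolling-policy option keys come from [1]):

```sql
-- Hypothetical json filesystem sink with time-based rolling.
CREATE TABLE fs_sink (
  user_id STRING,
  order_amount DOUBLE,
  dt STRING
) PARTITIONED BY (dt) WITH (
  'connector' = 'filesystem',
  'path' = 'file:///tmp/output',
  'format' = 'json',
  -- roll over to a new part file at most every minute
  'sink.rolling-policy.rollover-interval' = '1 min',
  -- how often the policy checks whether a file should roll
  'sink.rolling-policy.check-interval' = '1 min'
);
```

In addition, execution.checkpointing.interval would still need to be set in flink-conf.yaml so that in-progress files get finalized.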

[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/filesystem.html#rolling-policy

Best,
Jingsong

On Fri, Jul 17, 2020 at 4:25 PM kcz <[email protected]> wrote:

> The code is taken from
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/filesystem.html#full-example
> After replacing parquet with json, the checkpoint succeeds, but the files stay in the in-progress state. How can I get them to finish?
> With parquet the files already reach the success state.



-- 
Best, Jingsong Lee
