Yes.

Thanks for the feedback. That is a unit issue in the documentation: the correct unit for minutes is min.
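For reference, Flink's duration strings accept min for minutes (along with ms, s, h, d), so a time-based rolling policy would look roughly like the sketch below. The table name, columns, and path are illustrative, not taken from the thread; the two rolling-policy options are the ones from the linked documentation:

```sql
-- Hypothetical sink table using the filesystem connector;
-- note the duration unit is 'min', not 'm'.
CREATE TABLE fs_json_sink (
  user_id STRING,
  order_amount DOUBLE,
  dt STRING,
  `hour` STRING
) PARTITIONED BY (dt, `hour`) WITH (
  'connector' = 'filesystem',
  'path' = 'hdfs:///tmp/fs_json_sink',  -- illustrative path
  'format' = 'json',
  -- roll a new file every 30 minutes ('30m' would be rejected)
  'sink.rolling-policy.rollover-interval' = '30min',
  -- how often the rolling policy is evaluated
  'sink.rolling-policy.check-interval' = '1min'
);
```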


> On Jul 17, 2020, at 17:26, kcz <[email protected]> wrote:
> 
> Thanks, that solved it. One small issue: the documentation says 30m, but the code does not actually accept m for minutes.
> 
> 
> ------------------ Original Message ------------------
> From: "user-zh" <[email protected]>
> Sent: Friday, July 17, 2020, 4:57 PM
> To: "user-zh" <[email protected]>
> 
> Subject: Re: flink-1.11 DDL writing JSON-format data to HDFS issue
> 
> 
> 
> Hi,
> 
> As explained in [1], for csv and json you also need to configure the rolling-related parameters, because these formats are not forcibly rolled at checkpoints.
> 
> NOTE: For row formats (csv, json), you can set the parameter
> sink.rolling-policy.file-size or sink.rolling-policy.rollover-interval in
> the connector properties and parameter execution.checkpointing.interval in
> flink-conf.yaml together if you don’t want to wait a long period before
> observe the data exists in file system. For other formats (avro, orc), you
> can just set parameter execution.checkpointing.interval in flink-conf.yaml.
> 
> So if you want time-based rolling, you also need to configure sink.rolling-policy.rollover-interval and sink.rolling-policy.check-interval.
> 
> [1]
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/filesystem.html#rolling-policy
> 
> Best,
> Jingsong
> 
> On Fri, Jul 17, 2020 at 4:25 PM kcz <[email protected]> wrote:
> 
> > Code reference:
> >
> > https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/filesystem.html#full-example
> > After switching parquet to json, the checkpoints succeed, but the files stay in the in-progress state. How can I get them to finalize? The parquet files already reach the success state.
> 
> 
> 
> -- 
> Best, Jingsong Lee
