Hi, Jingsong Lee

     Thanks for taking the time to respond to the email; I will try following
your suggestion.



Best,
Yang



On 2020-10-19 11:56, Jingsong Li <jingsongl...@gmail.com> wrote:


Hi, Yang,


"SUCCESSFUL_JOB_OUTPUT_DIR_MARKER" does not work in StreamingFileSink.



You can take a look at the partition commit feature [1].


[1]https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/filesystem.html#partition-commit
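For context, the partition commit feature linked above lets the filesystem connector write a _SUCCESS file per partition via the 'success-file' commit policy. A minimal DDL sketch for Flink 1.11 (the table schema, partition columns, and path are hypothetical placeholders):

```sql
CREATE TABLE fs_table (
  user_id STRING,
  order_amount DOUBLE,
  dt STRING,
  `hour` STRING
) PARTITIONED BY (dt, `hour`) WITH (
  'connector' = 'filesystem',
  'path' = 'hdfs:///path/to/output',
  'format' = 'parquet',
  -- commit a partition once its (event-time) watermark passes partition time + delay
  'sink.partition-commit.trigger' = 'partition-time',
  'sink.partition-commit.delay' = '1 h',
  -- write a _SUCCESS file into each committed partition directory
  'sink.partition-commit.policy.kind' = 'success-file'
);
```

With 'success-file' as the policy kind, a _SUCCESS marker is added to each partition directory when the partition is committed, which is the supported replacement for the Hadoop FileOutputCommitter marker in this setup.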


Best,
Jingsong Lee


On Thu, Oct 15, 2020 at 3:11 PM highfei2011 <highfei2...@outlook.com> wrote:

Hi, everyone!
      I am currently experiencing a problem with the bucketing policy when
sinking to HDFS using the BucketAssigner of StreamingFileSink after consuming
Kafka data with Flink 1.11.2: the _SUCCESS tag file is not generated by default.
      I have added the following to the configuration:


import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter

val hadoopConf = new Configuration()
hadoopConf.set(FileOutputCommitter.SUCCESSFUL_JOB_OUTPUT_DIR_MARKER, "true")


But there is still no _SUCCESS file in the output directory, so why not support 
generating _SUCCESS files?


Thank you.




Best,
Yang




-- 

Best, Jingsong Lee
