viverlxl opened a new issue, #7162:
URL: https://github.com/apache/hudi/issues/7162

   **Describe the problem you faced**
   
   When I try to ingest data from Kafka to Hudi using the `HoodieFlinkStreamer` class, as soon as I start the main function the Hudi write client creates many rollback files in the local directory.
   
   <img width="616" alt="image" 
src="https://user-images.githubusercontent.com/35752202/200540750-8bbf8766-9898-4645-a8de-b3e09e499c60.png";>
   
   While debugging, I found that `AbstractStreamWriteFunction.initializeState` is executed many times. In my understanding, each operator should execute `initializeState` only once, when the job starts.
   <img width="760" alt="image" src="https://user-images.githubusercontent.com/35752202/200541176-4a679d0e-c0e5-4911-acf8-11e56daee299.png">

   I don't know what causes this, or why `initializeState` executes many times.
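   For context, `initializeState` is not guaranteed to run exactly once per operator: Flink calls it once per parallel subtask at startup, and again on every restore after a restart or failover. Here is a minimal probe (my own sketch, not Hudi code; the class and job names are made up, Flink 1.13 APIs assumed) that logs each invocation:

   ```java
   import org.apache.flink.api.common.functions.RichMapFunction;
   import org.apache.flink.runtime.state.FunctionInitializationContext;
   import org.apache.flink.runtime.state.FunctionSnapshotContext;
   import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

   public class InitStateProbe {

       // Identity map that logs every initializeState invocation.
       public static class InitCountingMap extends RichMapFunction<Long, Long>
               implements CheckpointedFunction {

           @Override
           public void initializeState(FunctionInitializationContext context) {
               // Called once per parallel subtask at startup, and again on
               // every restore after a failover or restart.
               System.out.printf("initializeState: subtask=%d restored=%b%n",
                       getRuntimeContext().getIndexOfThisSubtask(),
                       context.isRestored());
           }

           @Override
           public void snapshotState(FunctionSnapshotContext context) {
               // No state to snapshot; this probe only observes the lifecycle.
           }

           @Override
           public Long map(Long value) {
               return value;
           }
       }

       public static void main(String[] args) throws Exception {
           StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
           env.enableCheckpointing(5_000);
           env.fromSequence(0, 1_000_000)
                   .map(new InitCountingMap())
                   .print();
           env.execute("init-state-probe");
       }
   }
   ```

   With parallelism greater than one, this prints one line per subtask even on a clean start, so several `initializeState` calls do not by themselves indicate a problem; `restored=true` lines would indicate restarts.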
   
   The job code I'm running:

   ```java
   public class HoodieFlinkStreamer {
       public static void main(String[] argsd) throws Exception {
           // CLI arguments are hardcoded for this reproduction; the real
           // command-line arguments (argsd) are ignored.
           String[] args = new String[]{"--kafka-topic", "xxxx", "--kafka-group-id", "xxx",
                   "--kafka-bootstrap-servers", "xxx", "--table-type", "MERGE_ON_READ",
                   "--target-base-path", "file:///xxxx", "--target-table", "xxx",
                   "--source-avro-schema", "xxxx"};

           StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

           final FlinkStreamerConfig cfg = new FlinkStreamerConfig();
           JCommander cmd = new JCommander(cfg, null, args);
           if (cfg.help || args.length == 0) {
               cmd.usage();
               System.exit(1);
           }
           env.enableCheckpointing(cfg.checkpointInterval);
           env.getConfig().setGlobalJobParameters(cfg);
           // Checkpoints trigger the write operation, including instant
           // generation and committing; only one checkpoint may run at a time.
           env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);

           env.setStateBackend(cfg.stateBackend);
           if (cfg.flinkCheckPointPath != null) {
               env.getCheckpointConfig().setCheckpointStorage(cfg.flinkCheckPointPath);
           }

           TypedProperties kafkaProps = DFSPropertiesConfiguration.getGlobalProps();
           kafkaProps.putAll(StreamerUtil.appendKafkaProps(cfg));

           Configuration conf = FlinkStreamerConfig.toFlinkConfig(cfg);
           conf.setInteger(FlinkOptions.WRITE_TASKS, 1);
           conf.setInteger(FlinkOptions.INDEX_BOOTSTRAP_TASKS, 1);
           conf.setInteger(FlinkOptions.BUCKET_ASSIGN_TASKS, 1);

           // Read from the Kafka source.
           RowType rowType =
                   (RowType) AvroSchemaConverter.convertToDataType(StreamerUtil.getSourceSchema(conf))
                           .getLogicalType();

           long ckpTimeout = env.getCheckpointConfig().getCheckpointTimeout();
           conf.setLong(FlinkOptions.WRITE_COMMIT_ACK_TIMEOUT, ckpTimeout);

           DataStream<RowData> dataStream = env.addSource(new FlinkKafkaConsumer<>(
                           cfg.kafkaTopic,
                           new JsonRowDataDeserializationSchema(
                                   rowType,
                                   InternalTypeInfo.of(rowType),
                                   false,
                                   true,
                                   TimestampFormat.ISO_8601
                           ), kafkaProps))
                   .name("kafka_source")
                   .uid("uid_kafka_source");

           if (cfg.transformerClassNames != null && !cfg.transformerClassNames.isEmpty()) {
               Option<Transformer> transformer = StreamerUtil.createTransformer(cfg.transformerClassNames);
               if (transformer.isPresent()) {
                   dataStream = transformer.get().apply(dataStream);
               }
           }
           OptionsInference.setupSinkTasks(conf, env.getParallelism());
           DataStream<HoodieRecord> hoodieRecordDataStream = Pipelines.bootstrap(conf, rowType, dataStream);
           DataStream<Object> pipeline = Pipelines.hoodieStreamWrite(conf, hoodieRecordDataStream);
           if (OptionsResolver.needsAsyncCompaction(conf)) {
               Pipelines.compact(conf, pipeline);
           } else {
               Pipelines.clean(conf, pipeline);
           }

           env.execute(cfg.targetTableName);
       }
   }
   ```
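   One follow-up that might be worth trying (an assumption on my part, not verified against this job): if the job is silently restarting, each recovery re-runs `initializeState`, and interrupted writes would leave inflight instants for Hudi to roll back, which would explain both symptoms. Disabling the restart strategy makes any such failure surface immediately:

   ```java
   import org.apache.flink.api.common.restartstrategy.RestartStrategies;
   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

   public class NoRestartCheck {
       public static void main(String[] args) throws Exception {
           StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
           // Fail fast: any task failure stops the job instead of entering a
           // recovery cycle that calls initializeState again.
           env.setRestartStrategy(RestartStrategies.noRestart());
           // Trivial placeholder pipeline; in the report above, the Hudi
           // pipeline would be built here instead.
           env.fromElements(1, 2, 3).print();
           env.execute("no-restart-check");
       }
   }
   ```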
   **Environment Description**
   
   1. Flink version: 1.13.3
   2. Hudi version: 0.13
   

