Sweet-huang-main commented on issue #8071:
URL: https://github.com/apache/hudi/issues/8071#issuecomment-2475840606

   @DavidZ1 Hi, have you solved the problem with the stream_write operator? I have 
the same question as you. The versions and parameters are as follows:
   (1)Version: 
   Flink 1.17.1 
   Hudi 0.15.0
   Kafka 2.0.1
   (2)parameters
   Flink parallelism: 30,
   Flink slots: 2,
   Flink taskmanager.process.size: 8192m.
   (3) problem
   There are about 1,000,000,000 records per day coming from Kafka. The throughput 
of the stream_write operator drops after Flink has been running for about 1.5 hours. 
   My guess is that Hudi has difficulty sustaining writes to disk at this data 
volume.
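   For context, the Hudi write path in Flink is usually tuned through the table 
options. A minimal sketch of a sink definition with the throughput-related knobs 
(the table name, schema, and path are hypothetical; option names follow the Hudi 
Flink configuration, and the values are illustrative, not a verified fix):

```sql
-- Hypothetical Hudi sink; options relevant to sustained write throughput.
CREATE TABLE hudi_sink (
  id STRING PRIMARY KEY NOT ENFORCED,
  ts TIMESTAMP(3),
  payload STRING
) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs:///path/to/hudi_sink',   -- hypothetical path
  'table.type' = 'MERGE_ON_READ',         -- log-based writes, cheaper per commit than COPY_ON_WRITE
  'write.tasks' = '30',                   -- align with the job's sink parallelism
  'write.task.max.size' = '1024',         -- memory budget (MB) per write task
  'compaction.async.enabled' = 'true',    -- keep compaction off the ingestion path
  'compaction.tasks' = '4'
);
```

   If compaction runs inline with ingestion, throughput can degrade over time as 
log files accumulate, so checking the compaction settings may be a useful first step.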
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
