trushev commented on PR #7626:
URL: https://github.com/apache/hudi/pull/7626#issuecomment-1396468460

   @TengHuo 
   I tried the following workload with a MOR table, 2000 partitions, and compaction (the checkpoint here triggers compaction):
   ```java
   public class TestPartitionsWorkloadWithCompaction extends TestWriteMergeOnReadWithCompact {
     @Test
     public void write2000partitions() throws Exception {
       int partitionCount = 2000; // also the row count: one row per partition
       List<RowData> oneRowPerPartitionData = IntStream.range(0, partitionCount)
           .mapToObj(counter -> TestData.insertRow(
               StringData.fromString("id" + counter),
               StringData.fromString("Name"),
               0,
               TimestampData.fromEpochMillis(counter),
               StringData.fromString("par" + counter)))
           .collect(Collectors.toList());
       conf.setDouble(FlinkOptions.WRITE_BATCH_SIZE, 0.001); // ~1024 bytes
       conf.setDouble(FlinkOptions.WRITE_TASK_MAX_SIZE, 101.001); // 101.001 MB - 1024 bytes
       conf.setInteger(FlinkOptions.WRITE_MERGE_MAX_MEMORY, 1); // 1024 bytes
       preparePipeline(conf)
           .consume(oneRowPerPartitionData)
           .assertNextEvent()
           .checkpoint(1)
           .checkpointComplete(1)
           .end();
     }
   }
   ```
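   For context, the `WRITE_BATCH_SIZE` and `WRITE_TASK_MAX_SIZE` options above are specified in MB, so the tiny values force frequent flushes across the 2000 partitions. A minimal, dependency-free sketch of the MB-to-bytes arithmetic behind the inline comments (the option values come from the test above; the `SizeMath`/`mbToBytes` names are mine for illustration, not a Hudi API):

   ```java
   // Sketch: converts the MB-denominated Flink option values used in the test
   // into byte counts, to show why 0.001 MB is commented as roughly 1024 bytes.
   public class SizeMath {
       static long mbToBytes(double mb) {
           // 1 MB = 1024 * 1024 bytes; truncate fractional bytes
           return (long) (mb * 1024 * 1024);
       }

       public static void main(String[] args) {
           // WRITE_BATCH_SIZE = 0.001 MB -> roughly 1 KiB per write batch
           System.out.println("batch size bytes: " + mbToBytes(0.001));
           // WRITE_TASK_MAX_SIZE = 101.001 MB total task memory budget
           System.out.println("task max bytes: " + mbToBytes(101.001));
       }
   }
   ```

   With batch buffers this small, every row lands in its own flush, which is what exercises the many-partitions code path.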
   I believe this PR fixes your problem.
   
   <img width="714" alt="Screenshot 2023-01-19 at 12:39:46" src="https://user-images.githubusercontent.com/42293632/213364715-c0a4c125-7415-4cab-8d94-6916ba85172e.png">
   

