13083718081 opened a new issue, #10333:
URL: https://github.com/apache/hudi/issues/10333

   How do I correctly configure single-table multi-writer mode? I configured it according to the official website's instructions, but the task keeps restarting when generating a compaction plan.
   
   **Concurrency control configuration in the Flink job:**
   ```
   'hoodie.write.concurrency.mode' = 'optimistic_concurrency_control',
   'hoodie.cleaner.policy.failed.writes' = 'LAZY',
   'hoodie.write.lock.provider' = 'org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider',
   'hoodie.write.lock.zookeeper.url' = '192.168.1.10',
   'hoodie.write.lock.zookeeper.port' = '2181',
   'hoodie.write.lock.zookeeper.lock_key' = 'user_test_1',
   'hoodie.write.lock.zookeeper.base_path' = '/test'
   ```
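
   For context, options like these would typically appear in the `WITH` clause of the Flink SQL DDL. A minimal sketch follows; the table name, path, schema, and `MERGE_ON_READ` table type are illustrative assumptions (the report mentions compaction, which suggests MOR, but does not show the DDL):

   ```sql
   -- Illustrative DDL only; names and paths are placeholders, not from the report.
   CREATE TABLE user_test_1 (
     id BIGINT,
     name STRING,
     ts TIMESTAMP(3),
     `partition` STRING
   ) PARTITIONED BY (`partition`) WITH (
     'connector' = 'hudi',
     'path' = 'hdfs:///tmp/hudi/user_test_1',
     'table.type' = 'MERGE_ON_READ',
     -- multi-writer (OCC) settings from the report
     'hoodie.write.concurrency.mode' = 'optimistic_concurrency_control',
     'hoodie.cleaner.policy.failed.writes' = 'LAZY',
     'hoodie.write.lock.provider' = 'org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider',
     'hoodie.write.lock.zookeeper.url' = '192.168.1.10',
     'hoodie.write.lock.zookeeper.port' = '2181',
     'hoodie.write.lock.zookeeper.lock_key' = 'user_test_1',
     'hoodie.write.lock.zookeeper.base_path' = '/test'
   );
   ```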
   
   **Process description:**
   Writing data into different partitions works fine, but once the number of delta commits reaches the configured `compaction.delta_commits` value, an error is reported while generating the compaction plan.
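   
   The compaction trigger mentioned above is controlled by table options such as the following; the values shown are illustrative, since the report does not state what was actually configured:

   ```sql
   -- Illustrative values only; the actual settings in the failing job are not given.
   'compaction.trigger.strategy' = 'num_commits',  -- schedule a compaction plan every N delta commits
   'compaction.delta_commits' = '5'                -- N: delta commits before compaction is scheduled
   ```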
   
   **Error message:**
   ```
   Caused by: org.apache.hudi.exception.HoodieWriteConflictException: java.util.ConcurrentModificationException: Cannot resolve conflicts for overlapping writes
       at org.apache.hudi.client.transaction.SimpleConcurrentFileWritesConflictResolutionStrategy.resolveConflict(SimpleConcurrentFileWritesConflictResolutionStrategy.java:111)
       at org.apache.hudi.client.utils.TransactionUtils.lambda$resolveWriteConflictIfAny$0(TransactionUtils.java:89)
       at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
       at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
       at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
       at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
       at org.apache.hudi.client.utils.TransactionUtils.resolveWriteConflictIfAny(TransactionUtils.java:83)
       at org.apache.hudi.client.BaseHoodieClient.resolveWriteConflict(BaseHoodieClient.java:202)
       at org.apache.hudi.client.BaseHoodieWriteClient.preCommit(BaseHoodieWriteClient.java:346)
       at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:232)
       at org.apache.hudi.client.HoodieFlinkWriteClient.commit(HoodieFlinkWriteClient.java:112)
       at org.apache.hudi.client.HoodieFlinkWriteClient.commit(HoodieFlinkWriteClient.java:75)
       at org.apache.hudi.client.BaseHoodieWriteClient.commit(BaseHoodieWriteClient.java:201)
       at org.apache.hudi.sink.StreamWriteOperatorCoordinator.doCommit(StreamWriteOperatorCoordinator.java:564)
       at org.apache.hudi.sink.StreamWriteOperatorCoordinator.commitInstant(StreamWriteOperatorCoordinator.java:540)
       at org.apache.hudi.sink.StreamWriteOperatorCoordinator.commitInstant(StreamWriteOperatorCoordinator.java:509)
       at org.apache.hudi.sink.StreamWriteOperatorCoordinator.lambda$initInstant$6(StreamWriteOperatorCoordinator.java:419)
       at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$wrapAction$0(NonThrownExecutor.java:130)
       ... 3 more
   ```
   
   Please take a look and help identify where the problem is. Thanks!

