A-little-bit-of-data opened a new issue, #6251:
URL: https://github.com/apache/paimon/issues/6251

   ### Search before asking
   
   - [x] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.
   
   
   ### Paimon version
   
   1.1.1
   
   ### Compute Engine
   
   Flink 1.20.1
   
   ### Minimal reproduce step
   
   CREATE TABLE t_changelog_input (
       age BIGINT,
       money BIGINT,
       name STRING,
       PRIMARY KEY (name) NOT ENFORCED
   ) WITH (
       'bucket' = '4',
       'file.compression' = 'snappy',
       'merge-engine' = 'deduplicate',
       'changelog-producer' = 'input',
       'sink.parallelism' = '4'
   );
   
   The table is created as above. When a large amount of data is written or updated, the following error appears. However, with sink.parallelism set to 1 it does not occur. My data is stored on S3 and I use Hive Metastore 3.1.2. Is there something wrong with my usage? Is there any way to increase the write parallelism?
   
   <img width="1391" height="237" alt="Image" src="https://github.com/user-attachments/assets/da2aea69-64a3-4b7e-9d9c-2c7262897498" />
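   
   For completeness, the write is just a streaming insert into the table above. A minimal sketch is below; the `t_source` table and its datagen connector are stand-ins for my real source, and any high-volume stream keyed on `name` behaves the same once sink.parallelism is greater than 1:
   
   -- hypothetical stand-in source; any high-volume stream keyed on `name`
   -- reproduces the behaviour once sink.parallelism > 1
   CREATE TEMPORARY TABLE t_source (
       age BIGINT,
       money BIGINT,
       name STRING
   ) WITH (
       'connector' = 'datagen',
       'rows-per-second' = '10000'
   );
   
   INSERT INTO t_changelog_input
   SELECT age, money, name FROM t_source;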
   
   ### What doesn't meet your expectations?
   
   I hope that even when the data is stored on S3, which does not provide atomic rename, the table can still support multiple concurrent writers.
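   
   If the answer is a catalog-level lock, my understanding from the Paimon docs is that the Hive catalog can take a Metastore lock so that concurrent committers serialize their commits on object stores without atomic rename. A sketch of what I think that setup looks like (the URI and warehouse path are placeholders, and the option names would need to be checked against the 1.1.x docs):
   
   -- sketch only: enable the catalog lock backed by Hive Metastore,
   -- so that parallel writers serialize their commits on S3
   CREATE CATALOG paimon_hive WITH (
       'type' = 'paimon',
       'metastore' = 'hive',
       'uri' = 'thrift://hms-host:9083',         -- placeholder Metastore URI
       'warehouse' = 's3://my-bucket/warehouse', -- placeholder warehouse path
       'lock.enabled' = 'true'
   );
   
   Confirmation on whether this is the intended way to allow sink.parallelism greater than 1 on S3 would be appreciated.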
   
   ### Anything else?
   
   _No response_
   
   ### Are you willing to submit a PR?
   
   - [ ] I'm willing to submit a PR!

