Pandas886 opened a new issue, #2956:
URL: https://github.com/apache/incubator-paimon/issues/2956

   ### Search before asking
   
- [X] I searched in the [issues](https://github.com/apache/incubator-paimon/issues) and found nothing similar.
   
   
   ### Paimon version
   
   0.7
   
   ### Compute Engine
   
   flink-1.17.2
   
   ### Minimal reproduce step
   
   ```
   CREATE TABLE IF NOT EXISTS use_be_hours_4 (
       user_id BIGINT,
       item_id BIGINT,
       behavior STRING,
       dt STRING,
       hh STRING,
       PRIMARY KEY (user_id) NOT ENFORCED
   ) WITH (
     'manifest.format' = 'orc',
     'changelog-producer' = 'lookup',
     'incremental-between-scan-mode' = 'changelog',
     'changelog-producer.row-deduplicate' = 'true'
   );

   INSERT INTO use_be_hours_4
   VALUES
       (1, 1, 'watch', '2022-01-01', '10'),
       (2, 2, 'like', '2022-01-01', '10'),
       (0, 0, 'watch', '2022-01-01', '10');

   -- ERROR HAPPENS AS BELOW
   DELETE FROM use_be_hours_4 WHERE user_id = 2;
   ```
   
   ERROR MSG:
   ```
   Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: Can't extract bucket from row in dynamic bucket mode, you should use 'TableWrite.write(InternalRow row, int bucket)' method.
        at org.apache.paimon.table.TableUtils.deleteWhere(TableUtils.java:74)
        at org.apache.paimon.flink.sink.SupportsRowLevelOperationFlinkTableSink.executeDeletion(SupportsRowLevelOperationFlinkTableSink.java:175)
        at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:889)
        at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:874)
        at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:991)
        at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:765)
        at org.dinky.executor.DefaultTableEnvironment.executeSql(DefaultTableEnvironment.java:300)
        at org.dinky.executor.Executor.executeSql(Executor.java:208)
        at org.dinky.job.builder.JobDDLBuilder.run(JobDDLBuilder.java:47)
        at org.dinky.job.JobManager.executeSql(JobManager.java:339)
        ... 136 more
   Caused by: java.lang.IllegalArgumentException: Can't extract bucket from row in dynamic bucket mode, you should use 'TableWrite.write(InternalRow row, int bucket)' method.
        at org.apache.paimon.table.sink.DynamicBucketRowKeyExtractor.bucket(DynamicBucketRowKeyExtractor.java:44)
        at org.apache.paimon.table.sink.TableWriteImpl.toSinkRecord(TableWriteImpl.java:148)
        at org.apache.paimon.table.sink.TableWriteImpl.writeAndReturn(TableWriteImpl.java:125)
        at org.apache.paimon.table.sink.TableWriteImpl.write(TableWriteImpl.java:116)
        at org.apache.paimon.table.TableUtils.deleteWhere(TableUtils.java:67)
        ... 145 more
   ```
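
   The second `Caused by` pinpoints the problem: `TableUtils.deleteWhere` reads the rows matching the filter and writes them back as DELETE records through `TableWrite.write(InternalRow)`, but in dynamic bucket mode the bucket cannot be derived from the row alone, so `DynamicBucketRowKeyExtractor.bucket` throws. Below is a minimal sketch of what the exception message is asking for, written against Paimon's batch-write Java API; the warehouse path, database name, and the hard-coded bucket `0` are illustrative assumptions, not a suggested fix:

   ```
   import org.apache.paimon.catalog.Catalog;
   import org.apache.paimon.catalog.CatalogContext;
   import org.apache.paimon.catalog.CatalogFactory;
   import org.apache.paimon.catalog.Identifier;
   import org.apache.paimon.data.BinaryString;
   import org.apache.paimon.data.GenericRow;
   import org.apache.paimon.options.Options;
   import org.apache.paimon.table.Table;
   import org.apache.paimon.table.sink.BatchTableWrite;
   import org.apache.paimon.table.sink.BatchWriteBuilder;
   import org.apache.paimon.types.RowKind;

   public class DynamicBucketDeleteSketch {
       public static void main(String[] args) throws Exception {
           Options options = new Options();
           options.set("warehouse", "file:///tmp/paimon"); // assumed warehouse path
           try (Catalog catalog = CatalogFactory.createCatalog(CatalogContext.create(options))) {
               // assumed database name "default"
               Table table = catalog.getTable(Identifier.create("default", "use_be_hours_4"));

               BatchWriteBuilder builder = table.newBatchWriteBuilder();
               try (BatchTableWrite write = builder.newWrite()) {
                   // A -D record for the primary key being deleted.
                   GenericRow delete = GenericRow.ofKind(
                           RowKind.DELETE, 2L, 2L,
                           BinaryString.fromString("like"),
                           BinaryString.fromString("2022-01-01"),
                           BinaryString.fromString("10"));

                   // write.write(delete);  // what deleteWhere does -> IllegalArgumentException
                   write.write(delete, 0); // dynamic bucket mode needs an explicit bucket;
                                           // 0 is a placeholder for what a bucket assigner
                                           // would normally compute
                   builder.newCommit().commit(write.prepareCommit());
               }
           }
       }
   }
   ```

   In the row-level DELETE code path there is no bucket assigner to supply that second argument, which also explains the workaround below: with `'bucket' = '1'` the table uses fixed bucket mode, the bucket can be computed from the row's bucket key, and the single-argument `write` succeeds.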
   
   However, if I explicitly declare the number of buckets when creating the table, as in the statement below, the issue does not occur.
   
   ```
   CREATE TABLE IF NOT EXISTS use_be_hours_4 (
       user_id BIGINT,
       item_id BIGINT,
       behavior STRING,
       dt STRING,
       hh STRING,
       PRIMARY KEY (user_id) NOT ENFORCED
   ) WITH (
     'manifest.format' = 'orc',
     'changelog-producer' = 'lookup',
     'incremental-between-scan-mode' = 'changelog',
     'changelog-producer.row-deduplicate' = 'true',
     'bucket' = '1'
   );
   ```
   
   
   ### What doesn't meet your expectations?
   
   I expect the DELETE statement to succeed regardless of whether I explicitly declare the number of buckets when creating the table (when 'bucket' is not set, the table uses dynamic bucket mode, as the error message indicates).
   
   
   
   ### Anything else?
   
   _No response_
   
   ### Are you willing to submit a PR?
   
   - [ ] I'm willing to submit a PR!

