LsomeYeah commented on PR #5817:
URL: https://github.com/apache/paimon/pull/5817#issuecomment-3691917166

   > @LsomeYeah Hi, it seems there is an issue I have encountered on Paimon 1.3.1 with Flink 2.1.1.
   > 
   > I have tested the main case:
   > 
   > * I created a Paimon table with REST catalog metadata properties and then wrote some data into it via Flink.
   > 
   > As a result, an NPE was thrown in the Flink writer operator:
   > 
   > ```java
   > java.lang.RuntimeException: java.lang.RuntimeException: Fail to create table or get table: default.my_orders
   >    at org.apache.paimon.iceberg.IcebergRestMetadataCommitter.commitMetadata(IcebergRestMetadataCommitter.java:123)
   >    at org.apache.paimon.iceberg.IcebergCommitCallback.createMetadataWithBase(IcebergCommitCallback.java:666)
   >    at org.apache.paimon.iceberg.IcebergCommitCallback.createMetadata(IcebergCommitCallback.java:281)
   >    at org.apache.paimon.iceberg.IcebergCommitCallback.call(IcebergCommitCallback.java:229)
   >    at org.apache.paimon.operation.FileStoreCommitImpl.lambda$tryCommitOnce$16(FileStoreCommitImpl.java:1215)
   >    at java.base/java.util.ArrayList.forEach(Unknown Source)
   >    at org.apache.paimon.operation.FileStoreCommitImpl.tryCommitOnce(FileStoreCommitImpl.java:1213)
   >    at org.apache.paimon.operation.FileStoreCommitImpl.tryCommit(FileStoreCommitImpl.java:840)
   >    at org.apache.paimon.operation.FileStoreCommitImpl.commit(FileStoreCommitImpl.java:362)
   >    at org.apache.paimon.table.sink.TableCommitImpl.commitMultiple(TableCommitImpl.java:229)
   >    at org.apache.paimon.flink.sink.StoreCommitter.commit(StoreCommitter.java:111)
   >    at org.apache.paimon.flink.sink.CommitterOperator.commitUpToCheckpoint(CommitterOperator.java:215)
   >    at org.apache.paimon.flink.sink.CommitterOperator.notifyCheckpointComplete(CommitterOperator.java:192)
   >    at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.notifyCheckpointComplete(StreamOperatorWrapper.java:104)
   >    at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.notifyCheckpointComplete(RegularOperatorChain.java:145)
   >    at org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpoint(SubtaskCheckpointCoordinatorImpl.java:479)
   >    at org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpointComplete(SubtaskCheckpointCoordinatorImpl.java:412)
   >    at org.apache.flink.streaming.runtime.tasks.StreamTask.notifyCheckpointComplete(StreamTask.java:1578)
   >    at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointCompleteAsync$20(StreamTask.java:1519)
   >    at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointOperation$23(StreamTask.java:1558)
   >    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
   >    at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:118)
   >    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMail(MailboxProcessor.java:415)
   >    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsNonBlocking(MailboxProcessor.java:400)
   >    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:362)
   >    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:229)
   >    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:980)
   >    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:917)
   >    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:963)
   >    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:942)
   >    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:756)
   >    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:568)
   >    at java.base/java.lang.Thread.run(Unknown Source)
   > Caused by: java.lang.RuntimeException: Fail to create table or get table: default.my_orders
   >    at org.apache.paimon.iceberg.IcebergRestMetadataCommitter.commitMetadataImpl(IcebergRestMetadataCommitter.java:167)
   >    at org.apache.paimon.iceberg.IcebergRestMetadataCommitter.commitMetadata(IcebergRestMetadataCommitter.java:121)
   >    ... 32 more
   > Caused by: java.lang.NullPointerException
   >    at org.apache.paimon.iceberg.IcebergRestMetadataCommitter.checkBase(IcebergRestMetadataCommitter.java:355)
   >    at org.apache.paimon.iceberg.IcebergRestMetadataCommitter.commitMetadataImpl(IcebergRestMetadataCommitter.java:154)
   >    ... 33 more
   > ```
   > 
   > It seems the code logic assumes there is a `currentSnapshot` with a snapshot ID. All I can see that Flink created is a metadata file in JSON format in the REST catalog warehouse S3 folder. One of the fields inside the metadata file is:
   > 
   > ```json
   > "current-snapshot-id":-1
   > ```
   
   @novakov-alexey Hi, thanks a lot for your feedback. Could you please create an issue to track this bug? It would also be helpful if you could provide minimal reproduction steps; I will try to reproduce this case and fix it.
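   
   For reference, this is the rough shape of the reproduction I have in mind based on your description. The schema, warehouse path, and the `'rest-catalog'` storage value below are placeholders I made up, not the exact statements or property names you used, so please adjust them to your actual setup:
   
   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;
   
   public class RestMetadataNpeRepro {
   
       public static void main(String[] args) throws Exception {
           TableEnvironment tEnv =
                   TableEnvironment.create(EnvironmentSettings.inStreamingMode());
   
           // Paimon catalog; the warehouse path is a placeholder.
           tEnv.executeSql(
                   "CREATE CATALOG paimon WITH ("
                           + " 'type' = 'paimon',"
                           + " 'warehouse' = 's3://my-bucket/paimon'"
                           + ")");
           tEnv.executeSql("USE CATALOG paimon");
   
           // Table with Iceberg REST metadata properties. The storage value and any
           // REST-specific option keys are placeholders; substitute the properties
           // you actually configured.
           tEnv.executeSql(
                   "CREATE TABLE `default`.my_orders ("
                           + " order_id BIGINT,"
                           + " amount DOUBLE"
                           + ") WITH ("
                           + " 'metadata.iceberg.storage' = 'rest-catalog'"
                           + ")");
   
           // A single write is enough to trigger the commit callback where the NPE
           // in IcebergRestMetadataCommitter#checkBase shows up.
           tEnv.executeSql("INSERT INTO `default`.my_orders VALUES (1, 10.5)").await();
       }
   }
   ```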
   
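   From the `"current-snapshot-id": -1` you posted, my current guess (to be confirmed once I can reproduce it) is that `checkBase` dereferences the current snapshot of the table returned by the REST catalog, which is null when the table has no snapshot yet. Purely as an illustration of that reading, and not the actual Paimon code, the kind of guard that seems to be missing would look like this; `SnapshotRef` and `baseSnapshotId` are hypothetical names:
   
   ```java
   /**
    * Illustrative only, not the actual Paimon code: sketches the null guard that
    * appears to be missing when the Iceberg table in the REST catalog has no
    * current snapshot yet ("current-snapshot-id": -1).
    */
   public class CurrentSnapshotGuardSketch {
   
       /** In Iceberg table metadata, -1 means "no current snapshot". */
       static final long NO_CURRENT_SNAPSHOT = -1L;
   
       /** Hypothetical stand-in for whatever snapshot object checkBase inspects. */
       static class SnapshotRef {
           final long snapshotId;
   
           SnapshotRef(long snapshotId) {
               this.snapshotId = snapshotId;
           }
       }
   
       /**
        * Treats a freshly created table (no snapshot yet) as "no base" instead of
        * dereferencing a null current snapshot, which is what the NPE suggests.
        */
       static long baseSnapshotId(SnapshotRef currentSnapshot) {
           return currentSnapshot == null ? NO_CURRENT_SNAPSHOT : currentSnapshot.snapshotId;
       }
   
       public static void main(String[] args) {
           System.out.println(baseSnapshotId(null));                 // -1, no NPE
           System.out.println(baseSnapshotId(new SnapshotRef(42L))); // 42
       }
   }
   ```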

