dailai commented on issue #6831:
URL: https://github.com/apache/seatunnel/issues/6831#issuecomment-2114098916

   > I tried to explain why I use the JDBC source connector and batch mode, but 
perhaps I wasn't clear. There is a bug that only occurs in SeaTunnel's batch 
mode (CDC works well): when I wrote data in batch mode using SeaTunnel's Paimon 
sink, Paimon didn't keep the latest record, even though the table has a primary 
key and uses the deduplicate merge engine. However, both Flink's batch mode and 
Spark handle the same table and data correctly. So I think Paimon's deduplicate 
merge engine should work even if there are only inserts without update events. 
It might be helpful to run a simple test to see if this is the case.
   
   I don't think this has anything to do with the Paimon sink. In batch mode, 
the JDBC source reads the data only once, at the moment it executes the JDBC 
query. After that snapshot is read, it is sent downstream. Whether rows in the 
source are later updated or inserted, those changes will not be synchronized 
to the downstream.
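   To illustrate the batch shape being discussed, here is a minimal sketch of a SeaTunnel job config with a JDBC source and a Paimon sink. This is an illustrative example, not taken from the issue: the connection details are placeholders, and the exact option names (`job.mode`, the `Jdbc` and `Paimon` plugin keys) may vary between SeaTunnel versions, so check the connector docs for your release.

   ```hocon
   env {
     # Batch mode: the Jdbc source runs its query once and emits that snapshot.
     job.mode = "BATCH"
   }

   source {
     Jdbc {
       # Placeholder connection settings for illustration only.
       url = "jdbc:mysql://localhost:3306/mydb"
       driver = "com.mysql.cj.jdbc.Driver"
       user = "root"
       password = "***"
       # Rows changed in the source AFTER this query executes are never seen.
       query = "SELECT id, name, updated_at FROM my_table"
     }
   }

   sink {
     Paimon {
       # Placeholder Paimon catalog settings for illustration only.
       warehouse = "file:///tmp/paimon"
       database = "default"
       table = "my_table"
     }
   }
   ```

   In this setup every row arrives at the sink as an insert from a single point-in-time snapshot, which is why later changes in the source table cannot reach Paimon regardless of the merge engine configured on the target table.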

