xuzifu666 commented on issue #10542:
URL: https://github.com/apache/hudi/issues/10542#issuecomment-1979941864

   > hey @xuzifu666: do you happen to have the old data intact which had the 
data loss? We would like to root-cause this. The 0.x release line will be used by 
a lot of OSS users, so we really wanna get to the bottom of it and fix it.
   > 
   > Would greatly appreciate it if you can help us triage this.
   > 
   > * Do you happen to know when exactly the data loss happens? Do you see 
anything interesting in the timeline around the time the data loss happens?
   > * Is it a single writer or multi-writer?
   > * We do have some suspicion around log record reading that we are chasing. 
Ref ticket: [[SUPPORT] Data loss due to incorrect selection of log file during 
compaction #10803](https://github.com/apache/hudi/issues/10803). But I do not 
want to bias this one; let's get more info about when exactly the data loss is seen.
   > * Are there any task retries in general? I am not familiar w/ Flink, but 
in Spark we might have task retries. Are there any such things happening 
in your pipeline?
   > * Is it happening occasionally across all pipelines, or only in very few? 
And if it's very few, are there any common characteristics (index type, 
metadata enabled, etc.) compared to the other pipelines which do not have the 
data loss issue?
   > * And can you confirm that these pipelines were running w/o any issues on 
older versions of Hudi?
   > * Have you happened to reproduce this in a deterministic manner?
   
   Hi @nsivabalan, thanks for your attention. Going through your questions one 
by one:
   1. Judging from the timestamps of all the lost records, the loss appears to 
happen around the time a Flink job checkpoint finishes. The job state is 
healthy, though, and there is no exception in the timeline, which makes it hard 
to pinpoint the root cause.
   2. In our case, the data loss happened in a single-writer job.
   3. We read https://github.com/apache/hudi/issues/10803 recently, but that 
issue occurs in a compaction scenario. We have tested all of the following 
scenarios: a. Flink job with online compaction; b. Flink job without 
compaction; c. Flink job with compaction run by a separate Spark compaction 
job. Data loss can happen in all of them.
   4. The job is stable the whole time without any exception, and there were no 
task retries during the run.
   5. About 4 or 5 pipelines are affected. We do not use the metadata table; 
the table type is MOR and the index type is BUCKET.
   6. The Hudi version we use is 0.14.0.
   7. So far we have not found a deterministic way to reproduce it, because the 
job state looks fine and the timeline state is OK.
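   To make point 5 concrete, the affected pipelines are declared roughly like 
the sketch below in Flink SQL. The table name, schema, and path are made up for 
illustration; only the option values (MOR table type, bucket index, metadata 
table disabled) reflect our actual setup:

```sql
-- Hypothetical sketch of the pipeline's sink table; names and path are made up.
CREATE TABLE hudi_sink (
  id   BIGINT,
  name STRING,
  ts   TIMESTAMP(3),
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs:///tmp/hudi_sink',      -- made-up path
  'table.type' = 'MERGE_ON_READ',        -- MOR, as in point 5
  'index.type' = 'BUCKET',               -- bucket index
  'metadata.enabled' = 'false',          -- metadata table disabled
  'compaction.async.enabled' = 'true'    -- scenario (a); 'false' for (b)/(c)
);
```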
   Feel free to ask if you have any other questions.
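   As a side note for anyone triaging point 1: here is a minimal, hypothetical 
sketch of how one could list the timeline instants around a lost-record 
timestamp by parsing the filenames under the table's `.hoodie` directory 
(instant files are named `<instant_time>.<action>[.<state>]`, and Hudi 0.14 
instant times use the `yyyyMMddHHmmssSSS` format). The helper names below are 
made up:

```python
import re
from datetime import datetime, timedelta

# Hudi timeline instants live under <table_path>/.hoodie as files named
# <instant_time>.<action>[.<state>], e.g. 20240305120000123.deltacommit
# or 20240305120500456.compaction.requested.
INSTANT_RE = re.compile(r"^(\d{17})\.([a-z.]+?)(?:\.(requested|inflight))?$")

def parse_instant(filename):
    """Return (instant_time, action, state) for a timeline file, else None."""
    m = INSTANT_RE.match(filename)
    if not m:
        return None  # e.g. hoodie.properties and other non-instant files
    ts = datetime.strptime(m.group(1), "%Y%m%d%H%M%S%f")
    return ts, m.group(2), m.group(3) or "completed"

def instants_around(filenames, loss_time, window_minutes=10):
    """Keep only instants within +/- window_minutes of loss_time, sorted."""
    window = timedelta(minutes=window_minutes)
    out = []
    for name in filenames:
        parsed = parse_instant(name)
        if parsed and abs(parsed[0] - loss_time) <= window:
            out.append(parsed)
    return sorted(out)
```

   Feeding it a directory listing and the timestamp of a lost record narrows 
the timeline down to the instants worth inspecting by hand.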
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
