QiangCai commented on a change in pull request #3899:
URL: https://github.com/apache/carbondata/pull/3899#discussion_r477183265



##########
File path: integration/spark/src/test/scala/org/apache/carbondata/integration/spark/testsuite/dataload/TestLoadDataGeneral.scala
##########
@@ -310,7 +310,8 @@ class TestLoadDataGeneral extends QueryTest with BeforeAndAfterEach {
     val tableStatusFile = CarbonTablePath.getTableStatusFilePath(carbonTable.getTablePath)
     FileFactory.getCarbonFile(tableStatusFile).delete()
     sql("insert into stale values('k')")
-    checkAnswer(sql("select * from stale"), Row("k"))
+    // if the table loses its tablestatus file, the system should keep all data.
+    checkAnswer(sql("select * from stale"), Seq(Row("k"), Row("k")))

Review comment:
       Yes:
   1. Now, after removing the tablestatus file, all segments are considered successful by default.
   2. In the future, we can add a flag file to the segment when we mark the segment for deletion;
      the flag file records that the segment is mark_for_delete (see the sketch below).
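   A minimal Scala sketch of idea 2, assuming a hypothetical flag file name and a hypothetical
   SegmentDeleteFlag helper (illustrative only, not CarbonData's actual API):

      import java.nio.charset.StandardCharsets
      import java.nio.file.{Files, Paths}

      // Sketch only: drop a flag file into the segment directory when the segment
      // is marked for delete, so the status survives loss of the tablestatus file.
      object SegmentDeleteFlag {

        // Hypothetical flag file name inside a segment directory.
        private val FlagFileName = ".mark_for_delete"

        def markForDelete(segmentPath: String): Unit = {
          val flag = Paths.get(segmentPath, FlagFileName)
          // Record a short message so tools inspecting the segment can see it is mark_for_delete.
          Files.write(flag, "segment is mark_for_delete".getBytes(StandardCharsets.UTF_8))
        }

        // Readers (for example a scan that no longer has a tablestatus file) can check
        // the flag to decide whether to skip this segment's data.
        def isMarkedForDelete(segmentPath: String): Boolean =
          Files.exists(Paths.get(segmentPath, FlagFileName))
      }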



