ajantha-bhat commented on a change in pull request #3927:
URL: https://github.com/apache/carbondata/pull/3927#discussion_r489299596
##########
File path:
integration/spark/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala
##########
@@ -725,43 +725,53 @@ object CarbonDataRDDFactory {
val metadataDetails =
SegmentStatusManager.readTableStatusFile(
CarbonTablePath.getTableStatusFilePath(carbonTable.getTablePath))
+ val updateTableStatusFile =
CarbonUpdateUtil.getUpdateStatusFileName(updateModel
+ .updatedTimeStamp.toString)
+ val updatedSegments =
SegmentUpdateStatusManager.readLoadMetadata(updateTableStatusFile,
+ carbonTable.getTablePath).map(_.getSegmentName).toSet
val segmentFiles = segmentDetails.asScala.map { seg =>
- val load =
- metadataDetails.find(_.getLoadName.equals(seg.getSegmentNo)).get
- val segmentFile = load.getSegmentFile
- var segmentFiles: Seq[CarbonFile] = Seq.empty[CarbonFile]
+ // create new segment files and merge for only updated segments
Review comment:
I thought you would strictly follow the practice of cleaning up the base code in
the area you modify, to make it more readable and maintainable!
Here `seg`, `file`, and `carbonFile` are not good variable names; please check
and refactor them if you want.
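To illustrate the kind of rename being asked for, here is a minimal sketch. The case classes `SegmentDetail` and `LoadDetail` are stand-ins for the real CarbonData types, and the names `segmentDetail` / `loadDetail` are only suggestions, not anything from the CarbonData API:

```scala
// Hypothetical stand-ins for the CarbonData metadata types in the diff above.
case class SegmentDetail(segmentNo: String)
case class LoadDetail(loadName: String, segmentFile: String)

val metadataDetails = Seq(LoadDetail("0", "0_1599641100000.segment"))
val segmentDetails  = Seq(SegmentDetail("0"))

// Before: `segmentDetails.map { seg => ... }` -- `seg` says nothing about
// what each element is. A descriptive name makes the lookup's intent obvious.
val segmentFiles = segmentDetails.map { segmentDetail =>
  val loadDetail = metadataDetails
    .find(_.loadName == segmentDetail.segmentNo)
    .getOrElse(sys.error(s"No load entry for segment ${segmentDetail.segmentNo}"))
  loadDetail.segmentFile
}
```

The same pattern would apply to `file` and `carbonFile` in the surrounding code: name each value after what it represents (e.g. a merged segment file vs. a raw carbon data file) rather than its type.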
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]