ajantha-bhat commented on a change in pull request #4051:
URL: https://github.com/apache/carbondata/pull/4051#discussion_r544833038
##########
File path:
integration/spark/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonCleanFilesCommand.scala
##########
@@ -38,26 +40,33 @@ case class CarbonCleanFilesCommand(
isInternalCleanCall: Boolean = false)
extends DataCommand {
+  val LOGGER: Logger = LogServiceFactory.getLogService(this.getClass.getCanonicalName)
+
override def processData(sparkSession: SparkSession): Seq[Row] = {
Checker.validateTableExists(databaseNameOp, tableName, sparkSession)
val carbonTable = CarbonEnv.getCarbonTable(databaseNameOp,
tableName)(sparkSession)
setAuditTable(carbonTable)
- // if insert overwrite in progress, do not allow delete segment
- if (SegmentStatusManager.isOverwriteInProgressInTable(carbonTable)) {
+    // if insert overwrite in progress and table is not an MV, do not allow delete segment
+    if (!carbonTable.isMV && SegmentStatusManager.isOverwriteInProgressInTable(carbonTable)) {
Review comment:
Not here; handle it at the place where we call clean files for the MV when
clean files is called for the main table. Otherwise, if the user calls clean
files on the MV table while an insert overwrite is running concurrently, no
exception is thrown, which is out of sync with the main-table clean files
behavior.
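The suggestion above (keep the concurrency guard unconditional in the command, and handle the MV case at the call site instead) can be sketched as follows. This is a minimal illustration with hypothetical stand-ins: `Table`, `cleanFiles`, and `cleanMainTableAndMVs` are invented for the sketch and are not the real CarbonData API.

```scala
// Minimal sketch with hypothetical types; not the real CarbonData API.
object CleanFilesSketch {

  case class Table(name: String, isMV: Boolean, overwriteInProgress: Boolean)

  // The guard stays unconditional in the command itself, so a direct
  // clean-files call on an MV table behaves the same as one on a main table.
  def cleanFiles(table: Table): Unit = {
    if (table.overwriteInProgress) {
      throw new IllegalStateException(
        s"Cannot clean files on ${table.name}: insert overwrite in progress")
    }
    // ... perform the actual segment clean-up here ...
  }

  // Any MV special-casing lives where clean files on the main table
  // cascades to its MVs, not inside the shared guard.
  def cleanMainTableAndMVs(main: Table, mvs: Seq[Table]): Unit = {
    cleanFiles(main)
    mvs.foreach(cleanFiles)
  }
}
```

With this shape, a user-issued clean files on an MV still fails during a concurrent insert overwrite, which keeps its behavior consistent with the main-table command.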
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]