Github user ravipesala commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1583#discussion_r154119145
  
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/RefreshCarbonTableCommand.scala ---
    @@ -0,0 +1,264 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.sql.execution.command.management
    +
    +import java.util
    +
    +import scala.collection.JavaConverters._
    +
    +import org.apache.spark.sql._
    +import org.apache.spark.sql.execution.command.MetadataCommand
    +import org.apache.spark.sql.util.CarbonException
    +
    +import org.apache.carbondata.common.logging.{LogService, LogServiceFactory}
    +import org.apache.carbondata.core.constants.CarbonCommonConstants
    +import org.apache.carbondata.core.datastore.impl.FileFactory
    +import org.apache.carbondata.core.locks.{ICarbonLock, LockUsage}
    +import org.apache.carbondata.core.metadata.{AbsoluteTableIdentifier, CarbonTableIdentifier}
    +import org.apache.carbondata.core.metadata.converter.ThriftWrapperSchemaConverterImpl
    +import org.apache.carbondata.core.metadata.schema.table.{DataMapSchema, TableInfo}
    +import org.apache.carbondata.core.metadata.schema.table.column.ColumnSchema
    +import org.apache.carbondata.core.util.CarbonProperties
    +import org.apache.carbondata.core.util.path.CarbonStorePath
    +import org.apache.carbondata.events.{CreateTablePreExecutionEvent, OperationContext, OperationListenerBus, RefreshTablePostExecutionEvent, RefreshTablePreExecutionEvent}
    +import org.apache.carbondata.spark.util.CommonUtil
    +
    +/**
    + * Command to register a carbon table from existing carbon table data
    + */
    +case class RefreshCarbonTableCommand(
    +    dbName: Option[String],
    +    tableName: String)
    +  extends MetadataCommand {
    +  val LOGGER: LogService =
    +    LogServiceFactory.getLogService(this.getClass.getName)
    +
    +  override def processMetadata(sparkSession: SparkSession): Seq[Row] = {
    +    val metaStore = CarbonEnv.getInstance(sparkSession).carbonMetastore
    +    val databaseName = GetDB.getDatabaseName(dbName, sparkSession)
    +    val databaseLocation = GetDB.getDatabaseLocation(databaseName, sparkSession,
    +      CarbonProperties.getStorePath)
    +    // Steps
    +    // 1. get table path
    +    // 2. perform the below steps
    +    // 2.1 if the table is already registered with hive, ignore it and continue with
    +    // the next schema
    +    // 2.2 register the table with hive; if the table being registered has aggregate
    +    // tables, then do the below steps
    +    // 2.2.1 validate that all the aggregate tables are copied to the store location
    +    // 2.2.2 register the aggregate tables
    +    val locksToBeAcquired = List(LockUsage.METADATA_LOCK, LockUsage.DROP_TABLE_LOCK)
    --- End diff --
    
    Why are locks required here? We are not updating any files, right? We are
    just updating the DB; I think concurrent scenarios can be handled
    internally.
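    
    For reference, the acquire/release pattern in question typically looks
    like the sketch below. This is illustrative only, not the exact code in
    this PR: it assumes the usual `CarbonLockFactory`/`ICarbonLock` API and
    that `identifier` (the table's `AbsoluteTableIdentifier`) and `tableName`
    are in scope.
    
    ```scala
    import org.apache.carbondata.core.locks.{CarbonLockFactory, ICarbonLock, LockUsage}
    
    // Hypothetical sketch: acquire the metadata lock with retries before
    // touching the metastore.
    val metadataLock: ICarbonLock =
      CarbonLockFactory.getCarbonLockObj(identifier, LockUsage.METADATA_LOCK)
    if (!metadataLock.lockWithRetries()) {
      throw new RuntimeException(s"Unable to acquire metadata lock for $tableName")
    }
    try {
      // ... register the schema with the metastore while holding the lock ...
    } finally {
      // Always release the lock, even if the registration fails.
      metadataLock.unlock()
    }
    ```
    
    If the metastore update itself is atomic, the file-based lock here may be
    redundant, which is what the question above is getting at.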

