Github user xuchuanyin commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2851#discussion_r228010573
  
    --- Diff: 
integration/spark-common/src/main/scala/org/apache/carbondata/events/DataMapEvents.scala
 ---
    @@ -60,7 +60,8 @@ case class BuildDataMapPreExecutionEvent(sparkSession: 
SparkSession,
      * example: bloom datamap, Lucene datamap
      */
     case class BuildDataMapPostExecutionEvent(sparkSession: SparkSession,
    -    identifier: AbsoluteTableIdentifier, segmentIdList: Seq[String], 
isFromRebuild: Boolean)
    +    identifier: AbsoluteTableIdentifier, segmentIdList: Seq[String],
    +    isFromRebuild: Boolean, dmName: String)
    --- End diff --
    
    You can adjust the order of the parameters by moving `dmName` to just after 
`identifier`. That makes the signature easier to understand: for a given table's 
datamap, these segments were built, and the flag says whether it was a rebuild.
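    A minimal sketch of the suggested ordering (illustrative only, not the applied patch; `SparkSession` and `AbsoluteTableIdentifier` are the Spark and CarbonData types already used in the diff):

    ```scala
    // Suggested parameter order: table identifier, then the datamap name,
    // then the segments, then the rebuild flag.
    case class BuildDataMapPostExecutionEvent(
        sparkSession: SparkSession,           // session the build ran in
        identifier: AbsoluteTableIdentifier,  // which table
        dmName: String,                       // which datamap on that table
        segmentIdList: Seq[String],           // which segments were built
        isFromRebuild: Boolean)               // rebuild vs. normal load
    ```

    Grouping `dmName` with `identifier` keeps the "what is being built" parameters together, ahead of the "how/when" details.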


---