Github user gvramana commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1681#discussion_r157828690
  
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala ---
    @@ -488,7 +490,24 @@ case class CarbonLoadDataCommand(
             }
             InternalRow.fromSeq(data)
           }
    -      LogicalRDD(attributes, rdd)(sparkSession)
    +      if (updateModel.isDefined) {
    +        sparkSession.sparkContext.setLocalProperty(EXECUTION_ID_KEY, null)
     +        // In case of update, we don't need the segment id column in case of partitioning
    +        val dropAttributes = attributes.dropRight(1)
    +        val finalOutput = relation.output.map { attr =>
    +          dropAttributes.find { d =>
    +            val index = d.name.lastIndexOf("-updatedColumn")
    --- End diff --
    
    Find a better way to get the columns in order from the update flow instead of doing string manipulations.
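    
    As a hedged illustration of the suggestion (names like `Attr`, `bySuffix`, and `byMap` are hypothetical, not CarbonData APIs): rather than recovering the original column by parsing a `-updatedColumn` suffix out of each attribute name at lookup time, the update flow could carry an explicit map from original column name to updated attribute and resolve against that.
    
    ```scala
    // Hypothetical stand-in for a Spark Attribute; illustration only.
    case class Attr(name: String)
    
    // Suffix-based lookup, roughly what the diff does today: strip the
    // "-updatedColumn" suffix from each candidate and compare names.
    def bySuffix(output: Seq[Attr], updated: Seq[Attr]): Seq[Attr] =
      output.map { attr =>
        updated.find { d =>
          val idx = d.name.lastIndexOf("-updatedColumn")
          idx >= 0 && d.name.substring(0, idx) == attr.name
        }.getOrElse(attr)
      }
    
    // Map-based lookup, sketching the reviewer's suggestion: the update
    // flow builds the original-name -> updated-attribute map once, so no
    // string surgery is needed when ordering the final output.
    def byMap(output: Seq[Attr], updatedByName: Map[String, Attr]): Seq[Attr] =
      output.map(attr => updatedByName.getOrElse(attr.name, attr))
    ```
    
    Both return the relation's output order with updated attributes substituted in; the map variant keeps the naming convention in one place instead of scattering suffix parsing through the load command.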


