akashrn5 commented on a change in pull request #3793:
URL: https://github.com/apache/carbondata/pull/3793#discussion_r443361855
##########
File path:
integration/spark/src/main/scala/org/apache/spark/sql/execution/command/mutation/merge/CarbonMergeDataSetCommand.scala
##########
@@ -269,11 +271,10 @@ case class CarbonMergeDataSetCommand(
new SparkCarbonFileFormat().prepareWrite(sparkSession, job,
Map(), schema)
val config = SparkSQLUtil.broadCastHadoopConf(sparkSession.sparkContext,
job.getConfiguration)
-
(frame.rdd.coalesce(DistributionUtil.getConfiguredExecutors(sparkSession.sparkContext)).
- mapPartitionsWithIndex { case (index, iter) =>
+ (frame.rdd.mapPartitionsWithIndex { case (index, iter) =>
CarbonProperties.getInstance().addProperty(CarbonLoadOptionConstants
.ENABLE_CARBON_LOAD_DIRECT_WRITE_TO_STORE_PATH, "true")
- val confB = config.value.value
+ val confB = new Configuration(config.value.value)
Review comment:
I think adding a new conf for it is not correct; we need to analyze this
properly. Maybe you can revert these changes and we can handle it during the
other CDC optimizations.
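For context, the change under discussion (`val confB = new Configuration(config.value.value)`) follows a defensive-copy pattern: each task copies the broadcast Hadoop configuration before mutating it, so per-task settings do not leak into the shared broadcast value that other tasks read. A minimal sketch of that idea, using plain Java maps as a stand-in for Hadoop's `Configuration` (the class names and keys here are illustrative, not from the PR):

```java
import java.util.HashMap;
import java.util.Map;

public class DefensiveCopySketch {
    public static void main(String[] args) {
        // Stand-in for the broadcast configuration shared across tasks.
        Map<String, String> shared = new HashMap<>();
        shared.put("store.path", "/tmp/store");

        // Mutating `shared` directly would make the per-task setting
        // visible to every other task on the same executor.
        // Copying first keeps the mutation local to this task.
        Map<String, String> local = new HashMap<>(shared);
        local.put("direct.write", "true");

        System.out.println(shared.containsKey("direct.write")); // false
        System.out.println(local.get("direct.write"));          // true
    }
}
```

The reviewer's concern is about whether the extra copy per partition is the right fix at all, not about the pattern itself; Hadoop's `Configuration(Configuration other)` copy constructor does exist, but whether it is needed here depends on whether the broadcast value is actually mutated concurrently.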
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]