jackylk commented on a change in pull request #3540: 
[CARBONDATA-3639][CARBONDATA-3638] Fix global sort exception in load from CSV flow with binary non-sort columns
URL: https://github.com/apache/carbondata/pull/3540#discussion_r361846360
 
 

 ##########
 File path: 
integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessBuilderOnSpark.scala
 ##########
 @@ -88,11 +90,24 @@ object DataLoadProcessBuilderOnSpark {
 
     val conf = SparkSQLUtil.broadCastHadoopConf(sc, hadoopConf)
     // 1. Input
 -    val inputRDD = originRDD
 -      .mapPartitions(rows => DataLoadProcessorStepOnSpark.toRDDIterator(rows, modelBroadcast))
 -      .mapPartitionsWithIndex { case (index, rows) =>
 -        DataLoadProcessorStepOnSpark.inputFunc(rows, index, modelBroadcast, inputStepRowCounter)
 +    val inputRDD = if (isLoadFromCSV) {
 +      // No need to wrap with NewRDDIterator, which converts each object to a string,
 +      // since the rows are already strings.
 +      // This avoids creating a new object per row during a CSV global sort load.
 +      originRDD.mapPartitionsWithIndex { case (index, rows) => DataLoadProcessorStepOnSpark
 
 Review comment:
   Move DataLoadProcessorStepOnSpark to next line
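
The rationale in the diff above (handing CSV rows straight to the input step instead of wrapping each one first) can be sketched in isolation. This is an illustrative example, not the PR's code: `viaWrapper` and `direct` are hypothetical names, and the per-row `Array` holder merely stands in for the NewRDDIterator-style conversion the diff avoids.

```scala
// Illustrative sketch of the optimization, assuming rows arrive as strings.
object CsvInputSketch {
  // Generic path: wrap every row in a new holder object before processing.
  // One extra allocation per row, which the CSV branch tries to avoid.
  def viaWrapper(rows: Iterator[String]): Iterator[Array[AnyRef]] =
    rows.map(r => Array[AnyRef](r))

  // CSV path: rows are already strings, so pass them to the input step
  // directly and split there, with no intermediate holder object.
  def direct(rows: Iterator[String]): Iterator[Array[String]] =
    rows.map(_.split(","))
}
```

Over millions of rows in a global sort load, skipping that per-row wrapper reduces allocation and GC pressure on the input path.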

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services