Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1831#discussion_r165544265
--- Diff: conf/carbon.properties.template ---
@@ -76,22 +72,16 @@ carbon.enable.quick.filter=false
#carbon.block.meta.size.reserved.percentage=10
##csv reading buffer size.
#carbon.csv.read.buffersize.byte=1048576
-##To identify and apply compression for non-high cardinality columns
-#high.cardinality.value=100000
##maximum no of threads used for reading intermediate files for final merging.
#carbon.merge.sort.reader.thread=3
##Carbon blocklet size. Note: this configuration cannot be change once store is generated
#carbon.blocklet.size=120000
-##number of retries to get the metadata lock for loading data to table
-#carbon.load.metadata.lock.retries=3
##Minimum blocklets needed for distribution.
#carbon.blockletdistribution.min.blocklet.size=10
##Interval between the retries to get the lock
#carbon.load.metadata.lock.retry.timeout.sec=5
##Temporary store location, By default it will take System.getProperty("java.io.tmpdir")
-#carbon.tempstore.location=/opt/Carbon/TempStoreLoc
-##data loading records count logger
-#carbon.load.log.counter=500000
+#carbon.tempstore.location
--- End diff --
Are we really using this? I think we always depend on either the Java tmp dir or
get tmp directories from Spark/YARN. Please reverify and remove if not used.
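A minimal sketch of the fallback behavior the reviewer describes, assuming a hypothetical `resolveTempStore` helper (this is illustrative, not CarbonData's actual code; the property handling in CarbonData may differ):

```java
// Illustrative sketch: resolve a temp store location, preferring an explicitly
// configured path and otherwise falling back to the JVM temp dir
// (java.io.tmpdir), which Spark/YARN containers typically override per executor.
public class TempStoreLocation {
    static String resolveTempStore(String configuredPath) {
        // Use the configured path if one was supplied.
        if (configuredPath != null && !configuredPath.isEmpty()) {
            return configuredPath;
        }
        // Otherwise fall back to the JVM's default temporary directory.
        return System.getProperty("java.io.tmpdir");
    }

    public static void main(String[] args) {
        System.out.println(resolveTempStore(null));
        System.out.println(resolveTempStore("/opt/Carbon/TempStoreLoc"));
    }
}
```

If the fallback always resolves through `java.io.tmpdir` (which Spark/YARN set per container), a separate `carbon.tempstore.location` property would indeed be redundant, which is the reviewer's point.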
---