GitHub user zzcclp commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/1455#discussion_r149583968
--- Diff:
integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/LoadTableCommand.scala
---
@@ -84,6 +84,44 @@ case class LoadTableCommand(
     val carbonProperty: CarbonProperties = CarbonProperties.getInstance()
     carbonProperty.addProperty("zookeeper.enable.lock", "false")
+
+    val numCoresLoading =
+      try {
+        Integer.parseInt(CarbonProperties.getInstance()
+          .getProperty(CarbonCommonConstants.NUM_CORES_LOADING,
+            CarbonCommonConstants.NUM_CORES_MAX_VAL.toString()))
+      } catch {
+        case exc: NumberFormatException =>
+          LOGGER.error("Configured value for property " + CarbonCommonConstants.NUM_CORES_LOADING
+            + " is wrong. ")
+          CarbonCommonConstants.NUM_CORES_MAX_VAL
+      }
+
+    val newNumCoresLoading =
+      if (sparkSession.sparkContext.conf.contains("spark.executor.cores")) {
+        // When running on YARN, take the minimum of 'spark.executor.cores'
+        // and NUM_CORES_LOADING:
+        // if the user sets NUM_CORES_LOADING, it cannot exceed 'spark.executor.cores';
+        // if the user does not set NUM_CORES_LOADING, the value of
+        // 'spark.executor.cores' is used (NUM_CORES_LOADING defaults to
+        // NUM_CORES_MAX_VAL, so it can never exceed NUM_CORES_MAX_VAL either).
+        Math.min(
+          sparkSession.sparkContext.conf.getInt("spark.executor.cores", 1),
--- End diff --
The purpose of taking the minimum of 'spark.executor.cores' and
NUM_CORES_LOADING is to ensure that the effective value of NUM_CORES_LOADING
never exceeds 'spark.executor.cores'.
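
As a side note, the clamping rule is easy to sketch in isolation. The
following is a minimal, hypothetical Scala sketch, not code from this PR:
the object and method names are invented, and NUM_CORES_MAX_VAL is assumed
to be 32 for illustration.

    // Minimal sketch of the clamping rule described above. All names are
    // hypothetical; the PR reads the real values from CarbonProperties and
    // the SparkConf, and 32 stands in for NUM_CORES_MAX_VAL.
    object NumCoresClampSketch {
      val NumCoresMaxVal = 32

      // If 'spark.executor.cores' is configured, clamp the loading cores
      // to it; otherwise use the configured (or default) NUM_CORES_LOADING.
      def effectiveLoadingCores(
          executorCores: Option[Int],
          configuredNumCoresLoading: Int = NumCoresMaxVal): Int =
        executorCores match {
          case Some(cores) => math.min(cores, configuredNumCoresLoading)
          case None => configuredNumCoresLoading
        }

      def main(args: Array[String]): Unit = {
        // User asked for 8 loading cores but executors have only 4: clamped to 4.
        println(effectiveLoadingCores(Some(4), 8))  // 4
        // NUM_CORES_LOADING unset: defaults to NUM_CORES_MAX_VAL, clamped to 4.
        println(effectiveLoadingCores(Some(4)))     // 4
        // 'spark.executor.cores' not set: the configured value is used as-is.
        println(effectiveLoadingCores(None, 8))     // 8
      }
    }

In effect, 'spark.executor.cores' acts as a hard upper bound on the loading
parallelism per executor, which is the invariant described above.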
---