Github user sounakr commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1455#discussion_r149577349
  
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/LoadTableCommand.scala ---
    @@ -84,6 +84,44 @@ case class LoadTableCommand(
     
         val carbonProperty: CarbonProperties = CarbonProperties.getInstance()
         carbonProperty.addProperty("zookeeper.enable.lock", "false")
    +
    +    val numCoresLoading =
    +      try {
    +        Integer.parseInt(CarbonProperties.getInstance()
    +            .getProperty(CarbonCommonConstants.NUM_CORES_LOADING,
    +                CarbonCommonConstants.NUM_CORES_MAX_VAL.toString()))
    +      } catch {
    +        case exc: NumberFormatException =>
    +          LOGGER.error("Configured value for property " + CarbonCommonConstants.NUM_CORES_LOADING
    +              + " is wrong. ")
    +          CarbonCommonConstants.NUM_CORES_MAX_VAL
    +      }
    +
    +    val newNumCoresLoading =
    +      if (sparkSession.sparkContext.conf.contains("spark.executor.cores")) {
    +        // If running on YARN,
    +        // take the minimum of 'spark.executor.cores' and NUM_CORES_LOADING:
    +        // if the user set NUM_CORES_LOADING, it cannot exceed 'spark.executor.cores';
    +        // if the user did not set NUM_CORES_LOADING, the value of
    +        // 'spark.executor.cores' is used, capped at NUM_CORES_MAX_VAL
    +        // (NUM_CORES_LOADING's default value is NUM_CORES_MAX_VAL).
    +        Math.min(
    +          sparkSession.sparkContext.conf.getInt("spark.executor.cores", 1),
    --- End diff --
    
    Rather than taking the minimum of spark.executor.cores and NUM_CORES_LOADING, it would be better to give **precedence** to one of them: if NUM_CORES_LOADING is set in carbon.properties, take the value from there. Otherwise the behavior might be confusing for the end user.
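    The precedence suggested above can be sketched in standalone Scala. The object name, method name, parameters, and the max-cores constant value here are hypothetical stand-ins, not CarbonData's actual API; only the selection logic is the point:
    
    ```scala
    // Sketch: prefer an explicitly configured NUM_CORES_LOADING; only when the
    // user did not set it, fall back to spark.executor.cores capped at the max.
    object NumCoresPrecedence {
      // Stand-in for CarbonCommonConstants.NUM_CORES_MAX_VAL (illustrative value).
      val NumCoresMaxVal: Int = 32
    
      // configuredNumCores: NUM_CORES_LOADING if the user set it, else None.
      // executorCores: spark.executor.cores if present in the Spark conf, else None.
      def resolveNumCores(configuredNumCores: Option[Int],
                          executorCores: Option[Int]): Int = {
        configuredNumCores match {
          // User explicitly configured NUM_CORES_LOADING: it takes precedence.
          case Some(n) => n
          // Otherwise use spark.executor.cores, never exceeding the max default.
          case None => math.min(executorCores.getOrElse(NumCoresMaxVal), NumCoresMaxVal)
        }
      }
    }
    ```
    
    With this shape, `resolveNumCores(Some(4), Some(8))` yields 4 (user setting wins), while `resolveNumCores(None, Some(8))` yields 8 (fallback to executor cores), so the user never has to reason about an implicit minimum.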

