viirya commented on a change in pull request #30175:
URL: https://github.com/apache/spark/pull/30175#discussion_r564842489



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/EpochCoordinator.scala
##########
@@ -267,4 +275,16 @@ private[continuous] class EpochCoordinator(
       queryWritesStopped = true
       context.reply(())
   }
+
+  private def checkTotalCores(): Unit = {
+    val numExecutors = session.conf.get("spark.executor.instances", "1").toInt
+    val coresPerExecutor = session.conf.get("spark.executor.cores", "1").toInt
+    val totalCores = numExecutors * coresPerExecutor
+    logDebug(s"Check total cores $totalCores and kafka partition number 
$numReaderPartitions")

Review comment:
       Don't we need to consider `spark.task.cpus` as well? I think the actual number of tasks that can run concurrently is determined by the executor instances, the cores per executor, and the CPUs per task, right?
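
       For illustration, here is a rough sketch (untested) of what folding in `spark.task.cpus` could look like; it reuses the `session` and `numReaderPartitions` members from the diff above:

```scala
private def checkTotalCores(): Unit = {
  val numExecutors = session.conf.get("spark.executor.instances", "1").toInt
  val coresPerExecutor = session.conf.get("spark.executor.cores", "1").toInt
  val cpusPerTask = session.conf.get("spark.task.cpus", "1").toInt
  // The number of concurrent task slots is bounded by total cores divided
  // by cpus per task, not by the raw core count.
  val maxConcurrentTasks = (numExecutors * coresPerExecutor) / cpusPerTask
  logDebug(s"Check max concurrent tasks $maxConcurrentTasks and " +
    s"kafka partition number $numReaderPartitions")
}
```

       For example, with 2 executors of 4 cores each and `spark.task.cpus=2`, only 4 tasks can run at once even though there are 8 cores in total.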



