Ngone51 commented on a change in pull request #30175:
URL: https://github.com/apache/spark/pull/30175#discussion_r579316690



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/EpochCoordinator.scala
##########
@@ -267,4 +268,23 @@ private[continuous] class EpochCoordinator(
       queryWritesStopped = true
       context.reply(())
   }
+
+  private def checkTotalCores(): Unit = {
+    val INSTRUCTION_FOR_FEWER_CORES =
+      """
+        |Total %s (kafka partitions) * %s (cpus per task) = %s needed,
+        |but only have %s (executors) * %s (cores per executor) = %s (total cores).
+        |Please increase total number of executor cores to at least %s.
+    """.stripMargin
+    val numExecutors = session.conf.get("spark.executor.instances", "1").toInt

Review comment:
       This isn't valid for all cluster managers. For example, in Standalone mode the number of executors may be determined by `spark.cores.max` rather than `spark.executor.instances`.
   
   We can get the number of executors via `BlockManagerMaster.getPeers()` instead.
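   
   For illustration, a minimal sketch of that approach (not code from this PR; `numLiveExecutors` is a hypothetical helper, assumed to live inside `EpochCoordinator`, which sits under the `org.apache.spark` package and so can reach the `private[spark]` block manager APIs):
   
   ```scala
   import org.apache.spark.SparkEnv
   
   // Sketch only: count live executors from the driver via the
   // BlockManagerMaster, instead of reading "spark.executor.instances"
   // (which some cluster managers, e.g. Standalone, never set).
   private def numLiveExecutors(): Int = {
     val bm = SparkEnv.get.blockManager
     // getPeers excludes the driver's own block manager, so when called from
     // the driver it returns one BlockManagerId per registered executor.
     bm.master.getPeers(bm.blockManagerId).size
   }
   ```
   
   Because `getPeers()` reflects the executors registered at the moment of the check, this also behaves sensibly under dynamic allocation, where a static config value would not.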






