warrenzhu25 opened a new pull request, #36717: URL: https://github.com/apache/spark/pull/36717
### What changes were proposed in this pull request?
Add a check of the total executor cores when the SetReaderPartitions message is received.

### Why are the changes needed?
In continuous processing mode, EpochCoordinator won't add offsets to the query until it has received ReportPartitionOffset from all partitions. Normally, each Kafka topic partition is handled by one core, so if the total number of cores is smaller than the number of Kafka topic partitions, the job hangs without any error message.

### Does this PR introduce any user-facing change?
Yes. If the total executor cores are fewer than the total Kafka partition count, an exception with the following message will be thrown:

```
Total $numReaderPartitions (kafka partitions) * $cpusPerTask (cpus per task) = $neededCores needed, but only have min($numExecutors (executors) * $coresPerExecutor (cores per executor) = $totalRequestedCores, spark.cores.max = $maxCores) = $totalAvailableCores (total cores). Please increase total number of executor cores to at least $neededCores.
```

### How was this patch tested?
Added a test in EpochCoordinatorSuite.
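The proposed check can be sketched as below. This is an illustrative assumption of how such a check might look, not the actual patch code: the object name `CoreAvailabilityCheck`, the method `checkEnoughCores`, and its parameters are all hypothetical; only the error-message format mirrors the one quoted in the PR description.

```scala
// Hypothetical sketch: verify that enough executor cores exist to schedule
// one long-running task per Kafka reader partition before the query starts.
object CoreAvailabilityCheck {

  /** Throws if the cluster cannot run one continuous task per partition. */
  def checkEnoughCores(
      numReaderPartitions: Int,
      cpusPerTask: Int,
      numExecutors: Int,
      coresPerExecutor: Int,
      maxCores: Int): Unit = {
    val neededCores = numReaderPartitions * cpusPerTask
    val totalRequestedCores = numExecutors * coresPerExecutor
    // spark.cores.max caps the cores the application may actually use.
    val totalAvailableCores = math.min(totalRequestedCores, maxCores)
    if (totalAvailableCores < neededCores) {
      throw new IllegalStateException(
        s"Total $numReaderPartitions (kafka partitions) * $cpusPerTask (cpus per task) = " +
        s"$neededCores needed, but only have min($numExecutors (executors) * " +
        s"$coresPerExecutor (cores per executor) = $totalRequestedCores, " +
        s"spark.cores.max = $maxCores) = $totalAvailableCores (total cores). " +
        s"Please increase total number of executor cores to at least $neededCores.")
    }
  }
}
```

For example, with 4 partitions but only 2 executors of 1 core each, the check fails fast with the message above instead of letting the query hang silently.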
