Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/1931#discussion_r16222847
--- Diff: core/src/main/scala/org/apache/spark/ContextCleaner.scala ---
@@ -171,12 +203,44 @@ private[spark] class ContextCleaner(sc: SparkContext) extends Logging {
     }
   }
+  /**
+   * Log the length of the reference queue through reflection.
+   * This is an expensive operation and should be called sparingly.
+   */
+  private def logQueueLength(): Unit = {
+    try {
+      queueLengthAccessor.foreach { field =>
+        val length = field.getLong(referenceQueue)
+        logDebug("Reference queue size is " + length)
+        if (length > queueCapacity) {
+          logQueueFullErrorMessage()
+        }
+      }
+    } catch {
+      case e: Exception =>
+        logDebug("Failed to access reference queue's length through reflection: " + e)
+    }
+  }
+
+  /**
+   * Log an error message to indicate that the queue has exceeded its capacity. Do this only once.
+   */
+  private def logQueueFullErrorMessage(): Unit = {
+    if (!queueFullErrorMessageLogged) {
+      queueFullErrorMessageLogged = true
+      logError(s"Reference queue size in ContextCleaner has exceeded $queueCapacity! " +
--- End diff --
I am not sure whether this should be logError. It's not as if the system
immediately tips over once it reaches this capacity. I think it should be a
logWarning.
---
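For context, queueLengthAccessor, referenceQueue, queueCapacity, and
queueFullErrorMessageLogged are defined elsewhere in ContextCleaner and are
not part of this hunk. The sketch below shows, in a self-contained form, how
such an accessor could plausibly be obtained via reflection; it assumes (as an
OpenJDK implementation detail, not something stated in the diff) that
java.lang.ref.ReferenceQueue tracks its size in a private "queueLength" field,
and the object and method names are illustrative, not code from this PR.

    import java.lang.ref.ReferenceQueue
    import java.lang.reflect.Field

    object ReferenceQueueLengthSketch {

      // Reflectively look up ReferenceQueue's private "queueLength" counter.
      // The field name is a JVM implementation detail, so the lookup may fail
      // on some JVMs; in that case the accessor is None and callers skip logging.
      private val queueLengthAccessor: Option[Field] =
        try {
          val field = classOf[ReferenceQueue[_]].getDeclaredField("queueLength")
          field.setAccessible(true)
          Some(field)
        } catch {
          case _: Exception => None
        }

      /** Best-effort size of `queue`, or None if the field is not accessible. */
      def lengthOf(queue: ReferenceQueue[_]): Option[Long] =
        queueLengthAccessor.flatMap { field =>
          try Some(field.getLong(queue)) catch { case _: Exception => None }
        }

      def main(args: Array[String]): Unit = {
        val queue = new ReferenceQueue[AnyRef]
        println(s"Reference queue size is ${lengthOf(queue).getOrElse(-1L)}")
      }
    }

Because the field name is not part of any public API, both the lookup and each
read are wrapped in try/catch and degrade to doing nothing, which is consistent
with the hunk above logging only at debug level when reflection fails.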