Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/12113#discussion_r62332230
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -428,40 +503,89 @@ private[spark] class MapOutputTrackerMaster(conf: SparkConf)
}
}
+ private def removeBroadcast(bcast: Broadcast[_]): Unit = {
+ if (null != bcast) {
+ broadcastManager.unbroadcast(bcast.id,
+ removeFromDriver = true, blocking = false)
+ }
+ }
+
+ private def clearCachedBroadcast(): Unit = {
+ for (cached <- cachedSerializedBroadcast) removeBroadcast(cached._2)
+ cachedSerializedBroadcast.clear()
+ }
+
def getSerializedMapOutputStatuses(shuffleId: Int): Array[Byte] = {
var statuses: Array[MapStatus] = null
+ var retBytes: Array[Byte] = null
var epochGotten: Long = -1
- epochLock.synchronized {
- if (epoch > cacheEpoch) {
- cachedSerializedStatuses.clear()
- cacheEpoch = epoch
- }
- cachedSerializedStatuses.get(shuffleId) match {
- case Some(bytes) =>
- return bytes
- case None =>
- statuses = mapStatuses.getOrElse(shuffleId, Array[MapStatus]())
- epochGotten = epoch
+
+ // Check to see if we have a cached version, returns true if it does
+ // and has side effect of setting retBytes. If not returns false
+ // with side effect of setting statuses
+ def checkCachedStatuses(): Boolean = {
+ epochLock.synchronized {
+ if (epoch > cacheEpoch) {
+ cachedSerializedStatuses.clear()
+ clearCachedBroadcast()
+ cacheEpoch = epoch
+ }
+ cachedSerializedStatuses.get(shuffleId) match {
+ case Some(bytes) =>
+ retBytes = bytes
+ true
+ case None =>
+ logDebug("cached status not found for : " + shuffleId)
+ statuses = mapStatuses.getOrElse(shuffleId, Array[MapStatus]())
+ epochGotten = epoch
+ false
+ }
}
}
- // If we got here, we failed to find the serialized locations in the cache, so we pulled
- // out a snapshot of the locations as "statuses"; let's serialize and return that
- val bytes = MapOutputTracker.serializeMapStatuses(statuses)
- logInfo("Size of output statuses for shuffle %d is %d bytes".format(shuffleId, bytes.length))
- // Add them into the table only if the epoch hasn't changed while we were working
- epochLock.synchronized {
- if (epoch == epochGotten) {
- cachedSerializedStatuses(shuffleId) = bytes
+
+ if (checkCachedStatuses()) return retBytes
+ var shuffleIdLock = shuffleIdLocks.get(shuffleId)
+ if (null == shuffleIdLock) {
+ val newLock = new Object()
+ // in general, this condition should be false - but good to be paranoid
+ val prevLock = shuffleIdLocks.putIfAbsent(shuffleId, newLock)
--- End diff --
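For context, the lock-acquisition step in the diff (a plain `get` followed by `putIfAbsent` to resolve the race) can be sketched in isolation. This is a simplified standalone version, not Spark's actual code; the names here are illustrative:

```scala
import java.util.concurrent.ConcurrentHashMap

object ShuffleLockDemo {
  // Map from shuffle id to a per-shuffle lock object; entries may be
  // removed concurrently by cleanup (e.g. when a shuffle is unregistered).
  private val shuffleIdLocks = new ConcurrentHashMap[Int, AnyRef]()

  // Fetch the lock for a shuffle id, creating it if no entry exists,
  // e.g. because cleanup removed it before a late fetch message arrived.
  def lockFor(shuffleId: Int): AnyRef = {
    var lock = shuffleIdLocks.get(shuffleId)
    if (lock == null) {
      val newLock = new Object()
      // putIfAbsent returns the existing value if another thread won
      // the race, or null if our newLock was installed.
      val prev = shuffleIdLocks.putIfAbsent(shuffleId, newLock)
      lock = if (prev == null) newLock else prev
    }
    lock
  }
}
```

Because `putIfAbsent` is atomic, two threads racing to create the lock for the same shuffle id always end up synchronizing on the same object.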
It's purely defensive programming to allow things to keep working when the
unexpected happens. Would you rather have your production job that has been
running for 5 hours throw a null pointer exception, or try to fix itself and
continue to run?
In distributed systems weird things happen, and this code is processing a
message from another host/task that we don't directly control. You can get
network breaks, odd host failures or pauses, etc., and a message can come in
late asking for a shuffle id that isn't there anymore.
The unregister shuffle call, which removes the lock for the shuffle id, is
made from the context cleaner. So if an RDD goes out of scope and is cleaned
up, the shuffle lock gets removed. As I mention above, if some host was
slightly out of sync and sent a fetch for that id late, we would throw a null
pointer exception. Everything else in the GetMapOutputStatuses path handles
this case, and there is actually a test for it (fetching after unregister), so
if this line is removed that test fails.
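For reference, the epoch check in checkCachedStatuses amounts to an epoch-guarded cache: bumping the epoch invalidates all cached entries on the next read, and a serialized result is only published if the epoch did not change while it was being built. A minimal, self-contained sketch (simplified; names are illustrative, not Spark's actual code):

```scala
import scala.collection.mutable

object EpochCacheDemo {
  private val epochLock = new Object
  private var epoch: Long = 0
  private var cacheEpoch: Long = 0
  private val cached = mutable.Map[Int, Array[Byte]]()

  // Bump the epoch, e.g. when map outputs change; cached entries
  // become stale and are dropped on the next read.
  def incrementEpoch(): Unit = epochLock.synchronized { epoch += 1 }

  def currentEpoch: Long = epochLock.synchronized { epoch }

  // Return the cached bytes only if they are valid for the current epoch.
  def getCached(shuffleId: Int): Option[Array[Byte]] =
    epochLock.synchronized {
      if (epoch > cacheEpoch) {
        cached.clear()
        cacheEpoch = epoch
      }
      cached.get(shuffleId)
    }

  // Publish serialized bytes, but only if no epoch change happened
  // between the snapshot (epochGotten) and now.
  def putCached(shuffleId: Int, bytes: Array[Byte], epochGotten: Long): Unit =
    epochLock.synchronized {
      if (epoch == epochGotten) cached(shuffleId) = bytes
    }
}
```

The second synchronized block mirrors the diff's "add them into the table only if the epoch hasn't changed while we were working" check.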