Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11436#discussion_r54661043
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -315,6 +315,34 @@ abstract class RDD[T: ClassTag](
  }

+  /**
+   * Gets or computes an RDD partition. Used by RDD.iterator() when an RDD is cached.
+   */
+  private[spark] def getOrCompute(partition: Partition, context: TaskContext): Iterator[T] = {
+    val blockId = RDDBlockId(id, partition.index)
+    var readCachedBlock = true
+    SparkEnv.get.blockManager.getOrElseUpdate(blockId, storageLevel, () => {
+      readCachedBlock = false
+      computeOrReadCheckpoint(partition, context)
+    }) match {
+      case Left(blockResult) =>
+        if (readCachedBlock) {
+          val existingMetrics = context.taskMetrics().registerInputMetrics(blockResult.readMethod)
+          existingMetrics.incBytesReadInternal(blockResult.bytes)
+          new InterruptibleIterator[T](context, blockResult.data.asInstanceOf[Iterator[T]]) {
+            override def next(): T = {
+              existingMetrics.incRecordsReadInternal(1)
+              delegate.next()
--- End diff ---
I considered this but had some reservations, because we might not want to
treat every read from a BlockManager iterator as a read of an input record:
we also consume iterators in the implementation of getSingle() and in a couple
of other places, and those methods are used for things like TorrentBroadcast
blocks, which I don't think we want to count as input records for the purposes
of these metrics.
On the other hand, this `getOrElseUpdate` method really only makes sense in
the context of the caching code anyway, so maybe it's fine to couple its
semantics a little more tightly to its only current use.
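To illustrate the concern, here is a minimal sketch of the alternative being
discussed; the `RecordCountingIterator` name and the `incRecordsRead` callback
are hypothetical, not Spark's actual API. If the counting lived inside a
wrapper that the BlockManager applied to every block iterator it hands out,
every caller would inherit the accounting:

```scala
// Hypothetical sketch: a wrapper the BlockManager would apply to every
// block iterator it returns. Names here are illustrative only.
class RecordCountingIterator[T](
    delegate: Iterator[T],
    incRecordsRead: Long => Unit) extends Iterator[T] {

  override def hasNext: Boolean = delegate.hasNext

  override def next(): T = {
    // Fires for *every* caller of the BlockManager, so a read of a
    // TorrentBroadcast block would also be reported as an input record.
    incRecordsRead(1L)
    delegate.next()
  }
}
```

Counting at the call site in `getOrCompute`, guarded by `readCachedBlock` as
in the diff above, keeps the metric scoped to reads of actually cached RDD
blocks.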