GitHub user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2087#issuecomment-57229133
Hey @sryza, it seems like there are two things going on here. One is
adding incremental updates, and the other is changing the way we track
bytes read for Hadoop RDDs. For the incremental updates, could we just make
bytes read an atomic long and update it directly inside the `compute`
functions? That seems simpler than using callbacks. For instance, what if we
just updated the bytes read every N records by reading from the thread-local
information?
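
To make that concrete, here is a rough sketch of what I mean (`BytesReadSketch` and the per-thread statistics call are just illustrative, not code in this PR, and `getThreadStatistics` assumes a reasonably recent Hadoop version):

```scala
import java.util.concurrent.atomic.AtomicLong

import scala.collection.JavaConverters._

import org.apache.hadoop.fs.FileSystem

object BytesReadSketch {
  // Shared counter that heartbeats (or anything else) can read at any time.
  val bytesRead = new AtomicLong(0L)

  // Bytes read as reported by Hadoop's FileSystem statistics for the
  // current thread.
  def currentHadoopBytesRead(): Long =
    FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics.getBytesRead).sum

  // Called periodically from inside compute().
  def updateBytesRead(): Unit = bytesRead.set(currentHadoopBytesRead())
}
```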
The current approach couples the updating of this metric with the
heartbeats in a way that seems strange. In fact, is `updateBytesRead` ever
called here if heartbeats are disabled or the heartbeat interval is very long?
And don't we need to call `updateBytesRead` once the task finishes... for
instance, more bytes could have been read after the most recent heartbeat was
sent, right? If we took an approach that updated it every N records and then
again when the entire partition was computed, it would be easier to reason
about the order of updates.
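
For that second part, a minimal sketch of the iterator wrapping I have in mind (again, `MetricsUpdatingIterator` is a made-up name, not this PR's code):

```scala
// Wrap the record iterator so the metric is refreshed every N records and
// once more when the partition has been fully consumed.
class MetricsUpdatingIterator[T](
    underlying: Iterator[T],
    updateBytesRead: () => Unit,
    updateInterval: Int = 1000) extends Iterator[T] {

  private var recordsSinceUpdate = 0

  override def hasNext: Boolean = {
    val more = underlying.hasNext
    // Final update once the whole partition is computed, so bytes read after
    // the last periodic update (or heartbeat) are not dropped.
    if (!more) updateBytesRead()
    more
  }

  override def next(): T = {
    val record = underlying.next()
    recordsSinceUpdate += 1
    if (recordsSinceUpdate >= updateInterval) {
      updateBytesRead()
      recordsSinceUpdate = 0
    }
    record
  }
}
```

Then `compute` would just return something like `new MetricsUpdatingIterator(reader, () => BytesReadSketch.updateBytesRead())`, and the heartbeat only ever reads the atomic long, so the order of updates doesn't depend on heartbeat timing.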