Github user koeninger commented on the pull request:

    https://github.com/apache/spark/pull/3849#issuecomment-68407388
  
    The flip side is that it's already documented as doing the "right" thing:

    http://spark.apache.org/docs/1.1.1/api/scala/index.html#org.apache.spark.TaskContext

        val attemptId: Long
        the number of attempts to execute this task
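
    As a rough illustration (not code from this PR), here is a minimal
    sketch of user code that takes the documented wording at face value
    and treats attemptId as a retry counter. It assumes TaskContext.get()
    is available, as in Spark 1.2+. If the field is really a globally
    unique id over all attempts, the "retry" branch fires on essentially
    every task, even on a first run:

        import org.apache.spark.{SparkConf, SparkContext, TaskContext}

        object AttemptIdDemo {
          def main(args: Array[String]): Unit = {
            val sc = new SparkContext(
              new SparkConf().setAppName("attemptId-demo").setMaster("local[2]"))
            try {
              sc.parallelize(1 to 100, 4).foreachPartition { _ =>
                val ctx = TaskContext.get()
                // Reading the docs literally ("the number of attempts to
                // execute this task"), attemptId > 0 should mean "this is
                // a retry" -- but not if it is a globally unique id.
                if (ctx.attemptId > 0) {
                  println(s"partition ${ctx.partitionId} looks like a retry: ${ctx.attemptId}")
                }
              }
            } finally {
              sc.stop()
            }
          }
        }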
    
    On Tue, Dec 30, 2014 at 4:38 PM, Patrick Wendell <[email protected]>
    wrote:
    
    > So personally I don't think we should change the semantics of attemptId
    > because this has been exposed to user applications and they could silently
    > break if we modify the meaning of the field (my original JIRA referred to
    > an internal use of this). What it means right now is "a global GUID over
    > all attempts" - that is a bit of an awkward definition, but I don't think
    > it's fair to call this a bug - it was just a weird definition.
    >
    > So I'd be in favor of deprecating this in favor of taskAttemptId (a new
    > field) and say that it was renamed to avoid confusion. Then we can add
    > another field, attemptCount or attemptNum or something to convey the more
    > intuitive thing.
    >
    > It will be slightly awkward, but if anyone reads the docs it should be
    > obvious. In fact, we should probably spruce up the docs here for things
    > like partitionID which right now are probably not super clear to users.
    >
    > —
    > Reply to this email directly or view it on GitHub
    > <https://github.com/apache/spark/pull/3849#issuecomment-68406594>.
    >
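
    For reference, a rough sketch of what that split could look like on
    TaskContext, using the names floated above (taskAttemptId for the
    globally unique id, attemptNumber for the intuitive count). The
    deprecation message and version string are illustrative only, not a
    final API:

        abstract class TaskContext {
          /** A globally unique id for this task attempt (what attemptId holds today). */
          def taskAttemptId(): Long

          /** How many times this task has been attempted; 0 for the first attempt. */
          def attemptNumber(): Int

          /** The id of the partition computed by this task. */
          def partitionId(): Int

          /** Kept for compatibility; forwards to the unambiguous name. */
          @deprecated("use taskAttemptId or attemptNumber", "1.3.0")
          def attemptId: Long = taskAttemptId()
        }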

