I also ran into this earlier. It is a bug. Do you want to file a JIRA?

I think part of the problem is that we don't actually have the attempt id
on the executors. If we do, that's great. If not, we'd need to propagate
that over.
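For context, here is a minimal sketch of what a user might try in order to read task metadata inside a closure. This is an illustration only: it assumes `TaskContext.get()` is available on the executor, and it exercises the `attemptId` field under discussion, which currently returns the task id rather than an attempt number.

```scala
import org.apache.spark.{SparkConf, SparkContext, TaskContext}

// Sketch: read task metadata from inside a closure running on an executor.
object AttemptIdSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("attempt-id-sketch").setMaster("local[2]"))

    sc.parallelize(1 to 4, 2).foreachPartition { _ =>
      val ctx = TaskContext.get() // task context for the running task
      // One would expect an attempt number here (0 on the first try,
      // incrementing on retries), but as discussed in this thread the
      // value is actually the globally unique task id.
      println(s"partition ${ctx.partitionId}: attemptId = ${ctx.attemptId}")
    }

    sc.stop()
  }
}
```

If the attempt number is not already shipped to executors, fixing this would mean propagating it from the scheduler alongside the task id, as noted above.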

On Mon, Oct 20, 2014 at 7:17 AM, Yin Huai <huaiyin....@gmail.com> wrote:

> Hello,
>
> Is there any way to get the attempt number in a closure? It seems
> TaskContext.attemptId actually returns the taskId of a task (see
> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/executor/Executor.scala#L181
> and
> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/Task.scala#L47).
> It looks like a bug.
>
> Thanks,
>
> Yin
>