Any response on this one?
On Fri, Oct 13, 2017 at 1:01 AM, Tarun Kumar wrote:
I am specifying the executor host for my tasks via the hosts field of
org.apache.hadoop.fs.LocatedFileStatus#locations
(which is org.apache.hadoop.fs.BlockLocation[]), but tasks are still
getting scheduled on a different executor host.
- Speculative execution is off.
- Also confirmed that the locality wait timeout
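For comparison, preferred locations are only hints to the scheduler: an executor must actually be running on the named host, and spark.locality.wait bounds how long the scheduler holds out for a local slot. A minimal sketch of handing per-partition host hints to Spark (host names here are placeholders, not from the original message) uses SparkContext.makeRDD, which accepts a Seq of (element, preferred hosts):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object LocalitySketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("locality-sketch"))

    // Each element is paired with the hosts where its data lives,
    // e.g. the hosts field of a BlockLocation. These names are
    // illustrative placeholders.
    val blocksWithHosts: Seq[(String, Seq[String])] = Seq(
      ("block-0", Seq("host-a.example.com")),
      ("block-1", Seq("host-b.example.com"))
    )

    // makeRDD records these hosts as each partition's preferred
    // locations; the scheduler will try NODE_LOCAL placement first,
    // then fall back after spark.locality.wait expires.
    val rdd = sc.makeRDD(blocksWithHosts)
    rdd.foreach(block => println(s"processing $block"))

    sc.stop()
  }
}
```

If no executor is registered on a preferred host, or the locality wait elapses before a slot frees up there, the task is scheduled elsewhere; raising spark.locality.wait (default 3s) makes the scheduler wait longer for a local slot.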
Any response on this one? Thanks in advance!
On Thu, Oct 5, 2017 at 1:44 PM, Tarun Kumar wrote:
Hi, I registered an accumulator in the driver via
sparkContext.register(myCustomAccumulator, "accumulator-name"), but this
accumulator is not available in the task.metrics.accumulators() list.
The accumulator is not visible in the Spark UI either.
Does Spark need a different configuration to make the accumulator visible?
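For reference, a named accumulator normally needs no extra configuration to appear in the web UI, but it only shows up for stages whose tasks actually call add() on it inside a closure. A minimal custom AccumulatorV2 sketch (the class name and the final foreach are illustrative, not from the original message) registered the same way as above:

```scala
import org.apache.spark.util.AccumulatorV2

// Minimal custom accumulator: sums Long events.
class EventCounter extends AccumulatorV2[Long, Long] {
  private var count = 0L
  override def isZero: Boolean = count == 0L
  override def copy(): AccumulatorV2[Long, Long] = {
    val c = new EventCounter
    c.count = this.count
    c
  }
  override def reset(): Unit = { count = 0L }
  override def add(v: Long): Unit = { count += v }
  override def merge(other: AccumulatorV2[Long, Long]): Unit = {
    count += other.value
  }
  override def value: Long = count
}

val acc = new EventCounter
// Registering with a name is what makes it eligible to appear in
// the UI and in task-level accumulator updates.
sc.register(acc, "accumulator-name")

// The accumulator is reported only for stages whose tasks update it;
// if no task closure ever references it, nothing is shown.
sc.parallelize(1 to 100).foreach(_ => acc.add(1L))
```

If the accumulator is registered but never used inside a task, it will not appear in task metrics or in the UI, which matches the symptom described above.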