This is running in YARN cluster mode. It was restarted automatically and
continued fine.
I was trying to see what went wrong. AFAIK there were no task failures.
Nothing in the executor logs. The log I gave is from the driver.
After some digging, I did see in the Kafka logs that there was a rebalance
around this time.
Does restarting after a few minutes solve the problem? It could be a
transient issue that lasts long enough for Spark's task-level retries to all
fail.
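If the rebalance is the trigger, one mitigation is to make the consumer group
less likely to rebalance during a brief stall, and to give Spark more room to
retry. A sketch of the relevant settings (the values below are illustrative,
not recommendations; check the defaults for your Kafka and Spark versions):

```properties
# Kafka consumer settings (passed via kafkaParams): how long the group
# coordinator waits for a heartbeat before evicting the consumer and
# triggering a rebalance.
session.timeout.ms=30000
heartbeat.interval.ms=10000
# Kafka 0.10.1+: max time between poll() calls before the consumer is
# considered dead and removed from the group.
max.poll.interval.ms=300000

# Spark side: how long the cached executor consumer waits when polling,
# and how many times a task may fail before the stage is aborted.
spark.streaming.kafka.consumer.poll.ms=10000
spark.task.maxFailures=8
```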
On Tue, Feb 7, 2017 at 4:34 PM, Srikanth wrote:
Hello,
I had a Spark Streaming app that reads from Kafka. It ran for a few hours,
after which it failed with this error:
17/02/07 20:04:10 ERROR JobScheduler: Error generating jobs for time 148649785 ms
java.lang.IllegalStateException: No current assignment for partition mt_event-5
at