Hi all,
I am running into the following error with Spark 2.4.8:
> Job aborted due to stage failure: Task 9 in stage 2.0 failed 4 times, most
> recent failure: Lost task 9.3 in stage 2.0 (TID 100, 10.221.8.73, executor
> 2): org.apache.http.ConnectionClosedException: Premature end of
> record.

Please suggest anything I can tune here; this is running on a 5-node Spark cluster.
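For context, these are the knobs I was thinking of tuning (I am guessing here — not sure these are the right ones, and the values below are just examples, not recommendations):

    spark-submit \
      --conf spark.network.timeout=300s \
      --conf spark.executor.heartbeatInterval=60s \
      --conf spark.task.maxFailures=8 \
      ...

spark.network.timeout and spark.executor.heartbeatInterval raise the network/heartbeat timeouts (defaults 120s and 10s), and spark.task.maxFailures (default 4) controls how many times a task is retried before the stage is aborted.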
Bala
> On Jan 21, 2016, at 7:07 PM, Silvio Fiorito <silvio.fior...@granturing.com>
> wrote:
>
> Also, just to clarify, it doesn’t read the whole table into memory unless you
> specifically cache it.