That error can mean different things; most of the time it means the JVM
crashed (exit status 134 usually corresponds to SIGABRT). If you are running
on YARN, check the YARN logs or the stderr of your Spark job to see if there
are any more details about the cause.
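
A minimal sketch of pulling the aggregated YARN container logs, assuming log
aggregation is enabled; the application id below is made up for illustration,
and the exact flags can vary by Hadoop version:

    # fetch all container logs for the application (hypothetical app id)
    yarn logs -applicationId application_1637000000000_0001 > app.log

    # or narrow it down to the failing container id quoted in the error
    yarn logs -applicationId application_1637000000000_0001 -containerId container_xxx

The stderr section of the failing container is usually where the JVM crash
report or an OutOfMemoryError stack trace shows up.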

On Fri, 19 Nov 2021 at 15:25, Joris Billen <joris.bil...@bigindustries.be>
wrote:

> Hi,
> we are seeing this error:
>
> Job aborted due to stage failure: Task 0 in stage 1.0 failed 8...Reason:
> Container from a bad node: container_xxx on host: dev-yyy Exit status: 134
>
> This post suggests it has to do with blacklisted nodes:
>
> https://stackoverflow.com/questions/65889696/spark-exit-status-134-what-does-it-mean
>
> but in the Spark UI, all executors say blacklisted=0. Also, many other jobs
> are running happily on that same cluster, so I don't believe the nodes are
> “corrupted”.
>
>
>
> Thanks for input!
>
