Possibilities:
- You are now using more memory (and not getting killed by YARN), but are
exceeding OS memory and swapping
- Your heap sizes / config aren't quite right and now, instead of
failing earlier because YARN killed the job, you're running normally
but seeing a lot of time lost to GC thrashing
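One way to check the GC hypothesis is to turn on GC logging for the executors and look at how much wall-clock time the collector is eating (a sketch; the flags assume a HotSpot JVM before JDK 9, and the `...` stands in for the rest of your submit command):

```shell
spark-submit \
  --conf "spark.executor.extraJavaOptions=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
  ...
```

The per-executor "GC Time" column on the Executors tab of the Spark UI gives the same signal without restarting the job.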
Hello Experts,
For one of our streaming applications, we intermittently saw:
WARN yarn.YarnAllocator: Container killed by YARN for exceeding memory
limits. 12.0 GB of 12 GB physical memory used. Consider boosting
spark.yarn.executor.memoryOverhead.
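For context on where that 12 GB limit comes from: on YARN the container limit is the executor heap plus the memory overhead, and in Spark 1.x/2.x the overhead defaults to max(384 MB, 10% of executor memory) when `spark.yarn.executor.memoryOverhead` is not set (check the docs for your exact Spark version). A minimal sketch of that arithmetic:

```python
def yarn_container_limit_mb(executor_memory_mb, overhead_mb=None):
    """Approximate YARN container limit for a Spark executor, in MB.

    If no explicit overhead is configured, older Spark-on-YARN versions
    defaulted it to max(384 MB, 10% of the executor memory).
    """
    if overhead_mb is None:
        overhead_mb = max(384, int(executor_memory_mb * 0.10))
    return executor_memory_mb + overhead_mb

# e.g. --executor-memory 10g with the default overhead:
print(yarn_container_limit_mb(10 * 1024))  # 10240 + 1024 = 11264 MB
```

So when the executor's actual footprint (heap + off-heap buffers + native memory) creeps past that sum, YARN kills the container, which is why boosting the overhead is the usual first fix.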
Based on what I found on the internet and the error