Try increasing the value of spark.yarn.executor.memoryOverhead. Its default
value is 384 MB in Spark 1.1. This error generally occurs when your process's
memory usage exceeds its maximum allocation. Use the following property to
increase the memory overhead.
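For example, a minimal sketch of setting it programmatically (the 1024 MB
value and the app name are only illustrative; the property can equally be
passed as --conf to spark-submit or put in spark-defaults.conf):

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: raise the YARN executor memory overhead above the 384 MB default.
// On Spark 1.x the value is in megabytes; 1024 is illustrative, not a recommendation.
val conf = new SparkConf()
  .setAppName("shuffle-heavy-job") // hypothetical app name
  .set("spark.yarn.executor.memoryOverhead", "1024")

val sc = new SparkContext(conf)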
From: Yifan LI <iamyifa...@gmail.com>
Date: Friday,
Date: Saturday, 7 February 2015 1:22 am
To: Praveen Garg <praveen.g...@guavus.com>
Cc: Raghavendra Pandey <raghavendra.pan...@gmail.com>, "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: Shuffl
We tried changing the compression codec from snappy to lz4. It did improve
performance, but we are still wondering why the default options didn't work as
claimed.
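For reference, the switch described above amounts to setting
spark.io.compression.codec (a minimal sketch; snappy is the default in this
version):

import org.apache.spark.SparkConf

// Sketch of the codec change mentioned above: use lz4 instead of the default
// snappy for shuffle outputs, spills and other internally compressed data.
val conf = new SparkConf()
  .set("spark.io.compression.codec", "lz4")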
From: Raghavendra Pandey <raghavendra.pan...@gmail.com>
Date: Friday, 6 February 2015 1:23 pm
To: Praveen Garg <praveen.g...@guavus.com>
Hi,
While moving from Spark 1.1 to Spark 1.2, we are facing an issue where shuffle
read/write has increased significantly. We also tried running the job after
rolling back to the Spark 1.1 configuration, where we set spark.shuffle.manager
to hash and spark.shuffle.blockTransferService to nio. It d
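For anyone trying the same rollback, a minimal sketch of those two settings
(in Spark 1.2 the defaults changed to sort and netty, respectively):

import org.apache.spark.SparkConf

// Sketch of the Spark 1.1-style shuffle configuration mentioned above.
val conf = new SparkConf()
  .set("spark.shuffle.manager", "hash")             // Spark 1.2 default: sort
  .set("spark.shuffle.blockTransferService", "nio") // Spark 1.2 default: netty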