Not much information in the attachment.
There was a TimeoutException in BlockManagerMaster.removeRdd().
Any chance of more logs?
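For reference, in Spark 1.x the ask timeout used by BlockManagerMaster RPCs is controlled by spark.akka.askTimeout (in seconds). A hedged sketch of raising it, not a recommendation from this thread, and the value is purely illustrative:

```shell
# Hypothetical mitigation sketch (value is an assumption, not a tuning
# recommendation): raise the ask timeout used by driver<->executor RPCs
# such as BlockManagerMaster.removeRdd() in Spark 1.x.
--conf spark.akka.askTimeout=120
```

Whether this helps depends on why the RPC stalled; longer timeouts only mask a slow or dying executor.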
Thanks
On Thu, Jun 2, 2016 at 2:07 AM, Vishnu Nair wrote:
> Hi Ted
>
> We use Hadoop 2.6 & Spark 1.3.1. I also attached the error file
Hi,
A few things to examine more closely:
* Is the yarn master URL accepted in 1.3? I thought it was supported only in
later releases, but since you're seeing the issue it seems to work.
* I've never seen confs specified as a single string. Can you check in the
Web UI that they're applied?
* what about this
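On the single-string point: a sketch of passing each conf as its own --conf key=value flag instead (the flags and values here are assumptions for illustration, mirroring the command quoted later in the thread):

```shell
# Hypothetical sketch: one --conf per setting, so each key=value pair is
# parsed individually by spark-submit (values below are placeholders).
~/spark1.3/bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.executor.memory=4g \
  --conf spark.yarn.executor.memoryOverhead=1024
```

The Environment tab of the application's Web UI shows which of these actually took effect.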
Hi Ted
We use Hadoop 2.6 & Spark 1.3.1. I also attached the error file to this
mail; please have a look at it.
Thanks
On Thu, Jun 2, 2016 at 11:51 AM, Ted Yu wrote:
> Can you show the error in a bit more detail?
>
> Which release of Hadoop / Spark are you using?
>
> Is
Can you show the error in a bit more detail?
Which release of Hadoop / Spark are you using?
Is CapacityScheduler being used ?
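One way to check which scheduler is active (a hedged sketch, not from this thread): the ResourceManager exposes the scheduler type and per-queue settings over its REST API. The host is a placeholder:

```shell
# Hypothetical check: query the YARN ResourceManager REST API for the
# active scheduler and queue configuration (replace <rm-host> with the
# actual ResourceManager host; 8088 is the default web port).
curl http://<rm-host>:8088/ws/v1/cluster/scheduler
```

The response names the scheduler class and, for the CapacityScheduler, lists each queue's capacity limits, which is where preemption behavior is configured.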
Thanks
On Thu, Jun 2, 2016 at 1:32 AM, Prabeesh K. wrote:
> Hi, I am using the command below to run a Spark job and I get an error like
>
Hi, I am using the command below to run a Spark job and I get an error like
"Container preempted by scheduler".
I am not sure if it's related to incorrect memory settings:
nohup ~/spark1.3/bin/spark-submit \
  --num-executors 50 \
  --master yarn \
  --deploy-mode cluster \
  --queue adhoc \