Re: Debugging Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

2017-09-26 Thread Sathishkumar Manimoorthy
@Ayan It seems to be running on Spark standalone, not on YARN, I guess. Thanks, Sathish On Tue, Sep 26, 2017 at 9:09 PM, ayan guha wrote: > I would check the queue you are submitting the job to, assuming it is YARN... > > On Tue, Sep 26, 2017 at 11:40 PM, JG Perrin

RE: Debugging Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

2017-09-26 Thread JG Perrin
Not using YARN, just a standalone cluster with 2 nodes here (physical, not even VMs). The network seems good between the nodes. From: ayan guha [mailto:guha.a...@gmail.com] Sent: Tuesday, September 26, 2017 10:39 AM To: JG Perrin Cc: user@spark.apache.org Subject: Re: Debugging

Re: Debugging Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

2017-09-26 Thread ayan guha
I would check the queue you are submitting the job to, assuming it is YARN... On Tue, Sep 26, 2017 at 11:40 PM, JG Perrin wrote: > Hi, > > I get the infamous: > > Initial job has not accepted any resources; check your cluster UI to > ensure that workers are registered and have
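For the YARN case that suggestion maps to a single setting, spark.yarn.queue. A minimal sketch below shows where it would go; the queue name "myqueue" is an assumption, and the property only matters when the job actually runs on YARN (a standalone master ignores it).

import org.apache.spark.sql.SparkSession;

public class QueueCheckExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("QueueCheckExample")
                // Only meaningful on YARN: submit to a queue with free capacity,
                // otherwise the application sits in ACCEPTED and never gets containers.
                .config("spark.yarn.queue", "myqueue")   // assumed queue name
                .getOrCreate();

        spark.range(10).count();  // small action to confirm resources were actually granted
        spark.stop();
    }
}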

Re: partitionBy causing OOM

2017-09-26 Thread Amit Sela
Thanks for all the answers! It looks like increasing the heap a little, and setting spark.sql.shuffle.partitions to a much lower number (I used the recommended input_size_mb/128 formula), did the trick. As for partitionBy, unless I use repartition("dt") before the writer, it actually writes more
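A minimal sketch of the two changes described above, using the partition column "dt" from the thread; the input size, dataset, and paths are assumptions for illustration, not from the original message:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PartitionByExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("PartitionByExample")
                .getOrCreate();

        // Rule of thumb from the thread: shuffle partitions ~ input_size_mb / 128.
        // For example, a 25,600 MB input gives 200 partitions.
        long inputSizeMb = 25_600L;                                // assumed input size
        long shufflePartitions = Math.max(1L, inputSizeMb / 128L);
        spark.conf().set("spark.sql.shuffle.partitions", Long.toString(shufflePartitions));

        Dataset<Row> df = spark.read().parquet("/data/events");    // assumed input path

        // Repartitioning by the partition column first means each task writes to only
        // a few "dt" directories instead of opening a file per value, which keeps the
        // number of output files (and memory pressure) down.
        df.repartition(df.col("dt"))
          .write()
          .partitionBy("dt")
          .parquet("/data/events_by_dt");                          // assumed output path

        spark.stop();
    }
}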

Debugging Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

2017-09-26 Thread JG Perrin
Hi, I get the infamous: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources. I run the app via Eclipse, connecting: SparkSession spark = SparkSession.builder() .appName("Converter -
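The usual culprits behind that message on a standalone cluster are executor requests larger than what any single worker can offer, or a driver address the workers cannot reach back to. Below is a minimal sketch of a builder with those settings made explicit; the master URL, host name, and resource values are assumptions for illustration, not taken from the original message.

import org.apache.spark.sql.SparkSession;

public class ConverterDriver {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("Converter")
                .master("spark://master-host:7077")          // assumed standalone master URL
                .config("spark.executor.memory", "2g")       // must fit within a worker's advertised memory
                .config("spark.cores.max", "4")              // cap on total cores requested from the cluster
                .config("spark.driver.host", "driver-host")  // the driver (here, Eclipse) must be reachable from the workers
                .getOrCreate();

        spark.range(10).count();  // small action; it only completes once executors register
        spark.stop();
    }
}

If the values requested here exceed what the cluster UI shows as free on each worker, the scheduler keeps logging the same warning.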

PySpark: Overusing allocated cores / too many processes

2017-09-26 Thread Fabian Böhnlein
Hi all, the above topic has been mentioned before on this list between March and June 2016, and again mentioned