Re: 21/09/27 23:34:03 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

2021-09-27 Thread Stelios Philippou
It might be that you do not have the resources available on the cluster, so your job will keep waiting for them as they cannot be provided. On Tue, 28 Sep 2021, 04:26 davvy benny wrote: > How can I solve the problem? > > On 2021/09/27 23:05:41, Thejdeep G wrote: > > Hi, > > > > That would

Re: 21/09/27 23:34:03 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

2021-09-27 Thread davvy benny
How can I solve the problem? On 2021/09/27 23:05:41, Thejdeep G wrote: > Hi, > > That would usually mean that the application has not been allocated the > executor resources from the resource manager yet. > > On 2021/09/27 21:37:30, davvy benny wrote: > > Hi > > I am trying to run spark

Re: 21/09/27 23:34:03 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

2021-09-27 Thread Thejdeep G
Hi, That would usually mean that the application has not been allocated the executor resources from the resource manager yet. On 2021/09/27 21:37:30, davvy benny wrote: > Hi > I am trying to run spark programmatically from eclipse with these > configurations for hadoop cluster locally >

21/09/27 23:34:03 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

2021-09-27 Thread davvy benny
Hi, I am trying to run Spark programmatically from Eclipse with these configurations for a local Hadoop cluster: SparkConf sparkConf = new SparkConf().setAppName("simpleTest2").setMaster("yarn") .set("spark.executor.memory", "1g")
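For context, a fuller version of such a programmatic YARN configuration might look like the sketch below. The extra settings and their values are illustrative assumptions, not the poster's actual code; the point is that if the requested executor memory and cores exceed what YARN can actually allocate, the job sits in the "Initial job has not accepted any resources" state.

```java
import org.apache.spark.SparkConf;

// Hypothetical sketch of a programmatic YARN configuration.
// Values below are illustrative assumptions, not the original code.
SparkConf sparkConf = new SparkConf()
        .setAppName("simpleTest2")
        .setMaster("yarn")
        .set("spark.executor.memory", "1g")      // per-executor JVM heap
        .set("spark.executor.instances", "2")    // assumed executor count
        .set("spark.executor.cores", "1");       // assumed cores per executor
```

Requests that fit within the YARN NodeManager limits (yarn.nodemanager.resource.memory-mb and .cpu-vcores) are what let the scheduler accept the job.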

Re: Spark DStream application memory leak debugging

2021-09-27 Thread Sean Owen
This isn't specific to Spark; just use any standard Java approach, for example: https://dzone.com/articles/how-to-capture-java-heap-dumps-7-options. You need the JDK installed to use jmap. On Mon, Sep 27, 2021 at 1:41 PM Kiran Biswal wrote: > Thanks Sean. > > When executors has only 2gb,
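Besides the command line (jmap -dump:live,format=b,file=heap.hprof <pid>), a heap dump can also be triggered from inside the JVM on HotSpot JDKs via the HotSpotDiagnosticMXBean. A minimal sketch; the output file name here is arbitrary:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDump {
    public static void main(String[] args) throws Exception {
        // Proxy to the HotSpot diagnostic MBean of the current JVM.
        HotSpotDiagnosticMXBean mxBean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // true = dump only live objects (forces a full GC first),
        // equivalent to the "live" option of jmap.
        mxBean.dumpHeap("heap.hprof", true);
        System.out.println("Heap dump written to heap.hprof");
    }
}
```

The resulting .hprof file can be opened with Eclipse MAT or VisualVM to look for the leaking retained set.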

Re: Spark DStream application memory leak debugging

2021-09-27 Thread Kiran Biswal
Thanks Sean. When executors had only 2 GB, executors restarted every 2/3 hours with OOMKilled errors. When I increased executor memory to 12 GB and the number of cores to 12 (2 executors, 6 cores per executor), the OOMKilled stopped and restarts happen, but the memory usage peaks to 14 GB after a few
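One knob worth noting when containers are OOMKilled: the container's memory limit covers off-heap usage (direct buffers, metaspace, native memory) on top of the JVM heap set by spark.executor.memory, and that headroom is controlled by spark.executor.memoryOverhead. A hedged config sketch with illustrative values only:

```java
import org.apache.spark.SparkConf;

// Illustrative values, not a recommendation. OOMKilled means the container
// breached its memory limit, which includes off-heap usage beyond the heap.
SparkConf conf = new SparkConf()
        .set("spark.executor.memory", "12g")        // JVM heap per executor
        .set("spark.executor.memoryOverhead", "2g") // off-heap headroom (assumed)
        .set("spark.executor.cores", "6")
        .set("spark.executor.instances", "2");
```

If the overhead allowance is too small relative to actual off-heap usage, the kernel kills the container even though the heap itself never fills.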