How can I check it?

On 2021/09/28 03:29:45, Stelios Philippou <stevo...@gmail.com> wrote: 
> It might be that you do not have enough resources on the cluster, so
> your job will keep waiting until they can be provided.
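> 
> One way to check is to ask the ResourceManager for each node's capacity and
> current usage, either on http://localhost:8088/cluster/nodes or with
> yarn node -list from a shell. Below is a sketch of the same query via the
> YARN client API (org.apache.hadoop.yarn.client.api), assuming yarn-site.xml
> is on the classpath:
> 
>     import java.util.List;
>     import org.apache.hadoop.yarn.api.records.NodeReport;
>     import org.apache.hadoop.yarn.api.records.NodeState;
>     import org.apache.hadoop.yarn.client.api.YarnClient;
>     import org.apache.hadoop.yarn.conf.YarnConfiguration;
> 
>     public class ClusterCheck {
>         public static void main(String[] args) throws Exception {
>             // Connects to the ResourceManager named in yarn-site.xml.
>             YarnClient yarn = YarnClient.createYarnClient();
>             yarn.init(new YarnConfiguration());
>             yarn.start();
>             // Capacity vs. usage for every live NodeManager.
>             for (NodeReport n : yarn.getNodeReports(NodeState.RUNNING)) {
>                 System.out.printf("%s capability=%s used=%s%n",
>                         n.getNodeId(), n.getCapability(), n.getUsed());
>             }
>             yarn.stop();
>         }
>     }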
> 
> On Tue, 28 Sep 2021, 04:26 davvy benny, <davv...@gmail.com> wrote:
> 
> > How can I solve the problem?
> >
> > On 2021/09/27 23:05:41, Thejdeep G <tejde...@gmail.com> wrote:
> > > Hi,
> > >
> > > That would usually mean that the application has not yet been allocated
> > > executor resources by the resource manager.
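> > >
> > > If it stays in that state, the application report usually says why. A
> > > sketch via the YARN client API; the application id string is a
> > > placeholder for the real one shown on the /cluster/apps page:
> > >
> > >     import org.apache.hadoop.yarn.api.records.ApplicationId;
> > >     import org.apache.hadoop.yarn.api.records.ApplicationReport;
> > >     import org.apache.hadoop.yarn.client.api.YarnClient;
> > >     import org.apache.hadoop.yarn.conf.YarnConfiguration;
> > >
> > >     public class AppCheck {
> > >         public static void main(String[] args) throws Exception {
> > >             YarnClient yarn = YarnClient.createYarnClient();
> > >             yarn.init(new YarnConfiguration());
> > >             yarn.start();
> > >             // Placeholder id: copy the real one from /cluster/apps.
> > >             ApplicationReport report = yarn.getApplicationReport(
> > >                     ApplicationId.fromString("application_1632778650000_0001"));
> > >             System.out.println(report.getYarnApplicationState());
> > >             // Scheduler-side reason, if any (queue limits, no headroom, ...):
> > >             System.out.println(report.getDiagnostics());
> > >             yarn.stop();
> > >         }
> > >     }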
> > >
> > > On 2021/09/27 21:37:30, davvy benny <davv...@gmail.com> wrote:
> > > > Hi
> > > > I am trying to run Spark programmatically from Eclipse with these
> > > > configurations for a local Hadoop cluster:
> > > >
> > > >     SparkConf sparkConf = new SparkConf().setAppName("simpleTest2")
> > > >             .setMaster("yarn")
> > > >             .set("spark.executor.memory", "1g")
> > > >             .set("deploy.mode", "cluster")
> > > >             .set("spark.yarn.stagingDir", "hdfs://localhost:9000/user/hadoop/")
> > > >             .set("spark.shuffle.service.enabled", "false")
> > > >             .set("spark.dynamicAllocation.enabled", "false")
> > > >             .set("spark.cores.max", "1")
> > > >             .set("spark.executor.instances", "2")
> > > >             .set("spark.executor.memory", "500m")
> > > >             .set("spark.executor.cores", "1")
> > > >             .set("spark.yarn.nodemanager.resource.cpu-vcores", "4")
> > > >             .set("spark.yarn.submit.file.replication", "1")
> > > >             .set("spark.yarn.jars", "hdfs://localhost:9000/user/hadoop/davben/jars/*.jar");
> > > >
> > > > When I check http://localhost:8088/cluster/apps/RUNNING I can see that
> > > > my job is submitted, but my terminal loops, printing
> > > >
> > > > 21/09/27 23:36:33 WARN YarnScheduler: Initial job has not accepted any
> > > > resources; check your cluster UI to ensure that workers are registered and
> > > > have sufficient resources
> > > >
> > > > I've noticed that this occurs after applying a map to my Dataset.
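> > > >
> > > > For reference, the container arithmetic under Spark's defaults: each
> > > > executor asks YARN for spark.executor.memory plus an overhead of
> > > > max(384 MB, 10% of executor memory), so 500m + 384m = 884 MB, which the
> > > > scheduler rounds up to its 1024 MB minimum allocation. Two executors
> > > > therefore need about 2 GB of NodeManager capacity, plus at least another
> > > > 1024 MB for the YARN application master, which a small local cluster
> > > > may not have free.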
> > > >
> 

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
