Thank you for the answer.
I have now set these properties as you suggested:
SparkConf sparkConf = new SparkConf().setAppName("simpleTest2").setMaster("yarn")
        .set("spark.executor.memory", "1g")
        .set("deploy.mode",
How can I check it?
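For reference, a minimal sketch of how that configuration is usually spelled (assuming spark-core is on the classpath). Note that "deploy.mode" is not a recognized Spark property name; the documented key is "spark.submit.deployMode", and the "client" value below is only an illustrative assumption, not from the original message:

```java
import org.apache.spark.SparkConf;

public class SimpleTest2Conf {
    public static void main(String[] args) {
        // Sketch only: "deploy.mode" is not a Spark property key;
        // the documented one is "spark.submit.deployMode".
        // The "client" value is an assumption for illustration.
        SparkConf sparkConf = new SparkConf()
                .setAppName("simpleTest2")
                .setMaster("yarn")
                .set("spark.executor.memory", "1g")
                .set("spark.submit.deployMode", "client");
        System.out.println(sparkConf.toDebugString());
    }
}
```

Printing toDebugString() is a quick way to confirm which properties the application actually picked up.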
On 2021/09/28 03:29:45, Stelios Philippou wrote:
It might be possible that you do not have the resources on the cluster, so
your job will keep waiting for them, as they cannot be provided.
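One way to check the available capacity, sketched here with Hadoop's YarnClient API (an assumption on my part, not from the thread; it requires hadoop-yarn-client on the classpath and a reachable ResourceManager configured via yarn-site.xml):

```java
import java.util.List;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ClusterCapacityCheck {
    public static void main(String[] args) throws Exception {
        YarnClient yarn = YarnClient.createYarnClient();
        yarn.init(new YarnConfiguration()); // reads yarn-site.xml from the classpath
        yarn.start();
        // Compare what each node offers against what is already in use;
        // if used is close to capability, executor requests cannot be satisfied.
        List<NodeReport> nodes = yarn.getNodeReports(NodeState.RUNNING);
        for (NodeReport n : nodes) {
            System.out.printf("%s capability=%s used=%s%n",
                    n.getNodeId(), n.getCapability(), n.getUsed());
        }
        yarn.stop();
    }
}
```

The same numbers are visible in the ResourceManager web UI, typically on port 8088.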
On Tue, 28 Sep 2021, 04:26 davvy benny, wrote:
> How can I solve the problem?
>
> On 2021/09/27 23:05:41, Thejdeep G wrote:
> > Hi,
> >
> > That would
How can I solve the problem?
On 2021/09/27 23:05:41, Thejdeep G wrote:
Hi,
That would usually mean that the application has not been allocated the
executor resources from the resource manager yet.
On 2021/09/27 21:37:30, davvy benny wrote:
Hi
I am trying to run spark programmatically from eclipse with these
configurations for hadoop cluster locally
SparkConf sparkConf = new SparkConf().setAppName("simpleTest2").setMaster("yarn")
        .set("spark.executor.memory", "1g")
@Ayan
It seems to be running on Spark standalone, not on YARN, I guess.
Thanks,
Sathish
On Tue, Sep 26, 2017 at 9:09 PM, ayan guha wrote:
> I would check the queue you are submitting job, assuming it is yarn...
Not using YARN, just a standalone cluster with 2 nodes here (physical, not even
VMs). The network seems good between the nodes.
From: ayan guha [mailto:guha.a...@gmail.com]
Sent: Tuesday, September 26, 2017 10:39 AM
To: JG Perrin
Cc: user@spark.apache.org
Subject: Re: Debugging
I would check the queue you are submitting job, assuming it is yarn...
On Tue, Sep 26, 2017 at 11:40 PM, JG Perrin wrote:
Hi,
I get the infamous:
Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient resources
I run the app via Eclipse, connecting:
SparkSession spark = SparkSession.builder()
.appName("Converter -
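When this warning appears on a standalone cluster, one common fix is to cap what the application asks for so it fits what the registered workers advertise. A hedged sketch (the master URL reuses the host mentioned later in the thread with the default standalone port 7077; the memory and core values are assumptions, not from the original message):

```java
import org.apache.spark.sql.SparkSession;

public class ConverterApp {
    public static void main(String[] args) {
        // Sketch only: all values below are illustrative assumptions.
        SparkSession spark = SparkSession.builder()
                .appName("Converter")
                .master("spark://10.0.100.81:7077")    // standalone master, default port 7077
                .config("spark.executor.memory", "1g") // must fit within each worker's memory
                .config("spark.cores.max", "2")        // cap total cores so the app can be scheduled
                .getOrCreate();
        System.out.println(spark.version());
        spark.stop();
    }
}
```

If spark.executor.memory exceeds what any single worker offers, no executor can ever be placed and the job waits forever with exactly this warning.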
Hi Jean,
What does the master UI say? http://10.0.100.81:8080
Do you have enough resources available, or is there a running context
that is depleting all your resources?
Are your workers registered and alive? How much memory does each have? How many
cores each?
Best
On Mon, Sep 18, 2017 at 11:24
Hi,
I am trying to connect to a new cluster I just set up.
And I get...
[Timer-0:WARN] Logging$class: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have sufficient
resources
I must have forgotten something really super obvious.
My
> When the initial job has not accepted any resources, what can be
> wrong? Going through Stack Overflow and various blogs does not help. Maybe
> we need better logging for this? Adding dev
>
Did you take a look at the Spark UI to see your resource availability?
Thanks and Regards
Noorul
Hi all,
I run Spark on a Mesos cluster and ran into a problem: when I send 6 Spark
drivers *at the same time*, I can see on node3:8081 that
there are 4 drivers in "Launched Drivers" and 2 in "Queued Drivers". On
mesos:5050, I can see 4 active tasks running, but each task
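That queueing pattern is consistent with the dispatcher running out of offered resources: each launched driver itself holds cores and memory, so with six drivers submitted at once, only as many run as the offers can cover. A hedged sketch of shrinking the per-driver footprint so more drivers fit concurrently (all values are assumptions, not from the original message; requires spark-core on the classpath):

```java
import org.apache.spark.SparkConf;

public class MesosDriverSizing {
    public static void main(String[] args) {
        // Sketch only: all values are illustrative assumptions.
        SparkConf conf = new SparkConf()
                .setAppName("mesos-driver-sizing")
                .set("spark.driver.cores", "1")      // smaller drivers -> more fit at once
                .set("spark.driver.memory", "512m")
                .set("spark.cores.max", "2");        // cap executor cores so jobs leave room for others
        System.out.println(conf.toDebugString());
    }
}
```

Without spark.cores.max, a standalone or Mesos application may grab every available core, starving the queued drivers.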
Last night, I ran the jar in my pseudo-distributed mode without any WARN or
ERROR. However, today I am getting the WARN below, directly leading to the ERROR.
My computer has 8 GB of memory, and I don't think that is the issue as the WARN
describes. What's wrong? The code hasn't changed yet. And the
> 15/12/16 10:22:01 WARN cluster.YarnScheduler: Initial job has not
> accepted any resources; check your cluster UI to ensure that workers are
> registered and have sufficient resources
That means you don't have resources for your application; please check your
Hadoop web UI.
On Wed, Dec 16,