, "cluster")
    .set("spark.yarn.stagingDir", "hdfs://localhost:9000/user/hadoop/")
    .set("spark.shuffle.service.enabled", "false")
    .set("spark.executor.memory", "500m")
    .set("spark.executor.cores", "1")
    .set("spark.yarn.nodemanager.resource.cpu-vcores", "4")
    .set("spark.yarn.submit.file.replication", "1")
    .set("spark.yarn.jars", "hdfs://localhost:9000/user/hadoop/davben/jars/*.jar")
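When every setting above looks sane, the warning usually means the requested container (executor memory plus YARN's memory overhead) or core count does not fit on any NodeManager. A minimal sketch of that arithmetic, with hypothetical cluster figures; the max(384 MB, 10%) overhead matches Spark's documented default, but verify against your version:

```java
// Sanity-check whether a requested executor fits a YARN NodeManager.
// All figures are hypothetical; real values come from yarn-site.xml
// (yarn.nodemanager.resource.memory-mb / .cpu-vcores) and the Spark conf.
public class ResourceFit {
    static boolean fits(long execMemMb, int execCores,
                        long nodeMemMb, int nodeVcores) {
        // YARN allocates executor memory plus an overhead
        // (max(384 MB, 10% of executor memory) by default in Spark).
        long overheadMb = Math.max(384, (long) (execMemMb * 0.10));
        return execMemMb + overheadMb <= nodeMemMb && execCores <= nodeVcores;
    }

    public static void main(String[] args) {
        // 500 MB executor with 1 core on a 2048 MB / 4 vcore NodeManager: fits
        System.out.println(fits(500, 1, 2048, 4));
        // Same executor on a node with only 512 MB available: does not fit,
        // because 500 MB + 384 MB overhead exceeds 512 MB.
        System.out.println(fits(500, 1, 512, 4));
    }
}
```

If the request cannot fit, the application stays in ACCEPTED/WAITING and the scheduler keeps printing this warning.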
When I check on http://localhost:8088/cluster/apps/RUNNING I can see that
my job is submitted, but my terminal lo
Debugging Initial job has not accepted any resources; check your
cluster UI to ensure that workers are registered and have sufficient resources
I would check the queue you are submitting job, assuming it is yarn...
On Tue, Sep 26, 2017 at 11:40 PM, JG Perrin <jper...@lumeris.com> wrote:
> Hi,
>
>
>
> I get the infamous:
>
> Initial job has not accepted any resources; check your cluster UI to
> ensure that workers are registered and have sufficient resources
Hi,
I get the infamous:
Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient resources
I run the app via Eclipse, connecting:
SparkSession spark = SparkSession.builder()
.appName("Conv
Jean Georges Perrin <j...@jgp.net> wrote:
> Hi,
>
> I am trying to connect to a new cluster I just set up.
>
> And I get...
> [Timer-0:WARN] Logging$class: Initial job has not accepted any resources;
> check your cluster UI to ensure that workers are registered and have
> sufficient resources
Hi,
I am trying to connect to a new cluster I just set up.
And I get...
[Timer-0:WARN] Logging$class: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have sufficient
resources
I must have forgotten something really super obvious.
My
[Timer-0] WARN org.apache.spark.scheduler.TaskSchedulerImpl -
Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient resources.
object SparkPi {
val sparkConf = new SparkConf()
.setAppName("Spark Pi")
.setMaster("spark://10.100.103.25:7077")
> When Initial jobs have not accepted any resources then what all can be
> wrong? Going through stackoverflow and various blogs does not help. Maybe
> need better logging for this? Adding dev
>
Did you take a look at the spark UI to see your resource availability?
Thanks and Regards
Noorul
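The suggestion to look at the Spark UI for resource availability can also be scripted: a standalone master serves the same status shown in the web UI as JSON. A hedged sketch; the host is hypothetical, and the field names are those exposed by recent Spark standalone masters, so verify against your version:

```shell
# Fetch the standalone master's status as JSON (hypothetical host).
curl http://master-host:8080/json/
# Inspect each entry under "workers": compare "cores" vs "coresused" and
# "memory" vs "memoryused" to see whether anything is free for a new app.
```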
Hi All,
need your advice:
in some very rare cases we see the following error in the log:
Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient resources
and in the Spark UI there are idle workers and the application is in the WAITING state
in json
4:34 WARN DomainSocketFactory: The short-circuit local reads
feature cannot be used because UNIX Domain sockets are not available on
Windows.
16/08/17 01:04:52 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources
running, but each task
sends the warning: "Initial job has not accepted any resources; check your
cluster UI to ensure that workers are registered and have sufficient
resources". At this time, I see in Mesos that all CPUs are used on node1:5050,
and it runs forever until I kill a task.
My quest
master URL through spark submit command.
Thnx
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/spark-submit-hive-connection-through-spark-Initial-job-has-not-accepted-any-resources-tp24993p27074.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
below
15/12/16 10:22:01 WARN cluster.YarnScheduler: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and have
sufficient resources
15/12/16 10:22:04 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
ApplicationMaster has disassociated
>>> *15/12/16 10:22:01 WARN cluster.YarnScheduler: Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources*
That means you don't have resources for your application; please check your
Hadoop web UI.
On We
count is
"+df.count());
}
}
command to submit job: ./spark-submit --master spark://masterIp:7077
--deploy-mode client --class com.ceg.spark.hive.sparkhive.SparkHiveInsertor
--executor-cores 2 --executor-memory 1g
/home/someuser/Desk
Hi,
I am able to fetch data, create a table, and put data from the spark shell
(Scala command line) from Spark into Hive, but when I write Java code to do
the same and submit it through spark-submit, I get *"Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources"*
>>>> What pool is the spark shell being put into? (You can see this through
>>>> the YARN UI under scheduler)
>>>>
>>>> Are you certain you're starting spark-shell up on YARN? By default it
>>>> uses a local spark executor, so if it "just works" then it's because it's
>>>> not using dynamic allocation.
>>>
>>> On Wed, Sep 23, 2015 at 18:04 Jonathan Kelly <jonathaka...@gmail.com>
>>> wrote:
I'm running into a problem with YARN dynamicAllocation on Spark 1.5.0 after
using it successfully on an identically configured cluster with Spark 1.4.1.
I'm getting the dreaded warning "YarnClusterScheduler: Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources."
Hi,
I'm running Spark Standalone on a single node with 16 cores. Master and 4
workers are running.
I'm trying to submit two applications via spark-submit and am getting the
following error when submitting the second one: Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources
spark.cores.max themselves. Set this lower on a shared cluster to prevent
users from grabbing
the whole cluster by default.
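The quoted guidance can be made concrete with a toy model of the standalone scheduler's default behaviour: unless spark.cores.max caps it, the first application is offered every free core, leaving nothing for the second one. The numbers below are hypothetical:

```java
// Illustrates why a second app can get zero cores on a standalone cluster:
// by default an app takes every available core unless spark.cores.max caps it.
public class CoreStarvation {
    // Cores granted to an app: all remaining cores, or its cap if set (> 0).
    static int grant(int freeCores, int coresMax) {
        int want = (coresMax > 0) ? coresMax : Integer.MAX_VALUE;
        return Math.min(freeCores, want);
    }

    public static void main(String[] args) {
        int total = 16;               // hypothetical 16-core cluster
        int app1 = grant(total, 0);   // no spark.cores.max -> takes all 16
        int app2 = grant(total - app1, 0);
        System.out.println(app1 + " " + app2);  // app2 starves: 16 0

        app1 = grant(total, 8);       // spark.cores.max=8 on each app
        app2 = grant(total - app1, 8);
        System.out.println(app1 + " " + app2);  // both run: 8 8
    }
}
```

This is why the second spark-submit in the scenario above sits printing the warning: the first application already holds all 16 cores.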
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/The-Initial-job-has-not-accepted-any-resources-error-can-t-seem-to-set-tp23398p23399.html
TaskSchedulerImpl: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have sufficient
memory
Ultimately the job runs successfully in most cases, but I feel like this
error has a significant effect on the overall execution time of the job,
which I try
at 6:02 PM, Grzegorz Dubicki <grzegorz.dubi...@gmail.com> wrote:
Hi mehrdad,
I seem to have the same issue as you wrote about here. Did you manage to
resolve it?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/running-a-job-on-ec2-Initial-job-has-not-accepted-any-resources-tp20607p21218.html
vdiwakar.malladi <vdiwakar.mall...@gmail.com> wrote:
Hi,
When I try to execute the program from my laptop by connecting to the HDP
environment (on which Spark is also configured), I get the warning
(Initial job has not accepted any resources; check your cluster UI to
ensure that workers are registered and have sufficient memory) and the job
is being terminated. My console has
-spark-user-list.1001560.n3.nabble.com/issue-while-running-the-code-in-standalone-mode-Initial-job-has-not-accepted-any-resources-check-you-tp19628p19637.html
-SNAPSHOT-hadoop2.0.0-mr1-cdh4.2.1.jar The queue
`dt_spark` was free, and the program was submitted successfully and running on
the cluster. But on the console, it showed repeatedly:
14/11/18 15:11:48 WARN YarnClientClusterScheduler: Initial job has not accepted
any resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
Checked the cluster
on host2, but I get this:
14/10/14 21:54:23 WARN TaskSchedulerImpl: Initial job has not accepted
any resources; check your cluster UI to ensure that workers are
registered and have sufficient memory
And it repeats again and again.
How can I fix this?
Best Regards
Theo
at console:12)
14/08/07 17:15:18 INFO TaskSchedulerImpl: Adding task set 0.0 with 38 tasks
14/08/07 17:15:33 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
14/08/07 17:15:48 WARN
, and calling
rdd.count(); but Spark never managed to complete the job, giving a message
like the following: WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
I run a standalone Spark Cluster with 1
to pool default
2014-07-25 01:25:09,616 [Thread-2] DEBUG
falkonry.commons.service.ServiceHandler - Listening...
61847 [Timer-0] WARN org.apache.spark.scheduler.TaskSchedulerImpl -
Initial job has not accepted any resources; check your cluster UI to
ensure that workers are registered and have sufficient
this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Initial-job-has-not-accepted-any-resources-but-workers-are-in-UI-tp10659p10671.html
Solution: opened all ports on the EC2 machine that the driver was running on.
Need to narrow down which ports Akka wants... but the issue is solved.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Initial-job-has-not-accepted-any-resources-but-workers
It seems like the "Initial job has not accepted any resources" warning shows
up for a wide variety of different errors (for example the obvious one
where you've requested more memory than is available), but also, for
example, in the case where the worker nodes do not have the
appropriate code on their classpath
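For that classpath case, the usual remedy is to ship the application's code and dependencies with the job rather than assuming they already exist on the workers. A hedged sketch; all paths, the master URL, and the class name are hypothetical:

```shell
# Ship dependency jars alongside the application jar so worker-side tasks
# can load the code (hypothetical paths and class name).
spark-submit \
  --master spark://master-host:7077 \
  --class com.example.MyApp \
  --jars /path/to/dep1.jar,/path/to/dep2.jar \
  /path/to/my-app.jar
```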
in context:
http://apache-spark-user-list.1001560.n3.nabble.com/TaskSchedulerImpl-Initial-job-has-not-accepted-any-resources-check-your-cluster-UI-to-ensure-that-woy-tp8247p8444.html
shows repeatedly:
14/06/25 04:46:29 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
Looks like it's either a bug or misinformation. Can someone confirm this so I
can submit a JIRA?
--
View
-Initial-job-has-not-accepted-any-resources-check-your-cluster-UI-to-ensure-that-woy-tp8247p8285.html
appreciated.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Initial-job-has-not-accepted-any-resources-tp5322.html
.1001560.n3.nabble.com/Initial-job-has-not-accepted-any-resources-tp5322p5335.html
the 1.0 branch. Maybe this makes my problem(s) worse,
but am going to give it a try. Rapidly running out of time to get our code
fully working on EC2.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Initial-job-has-not-accepted-any-resources-tp5322p5344.html
hangs at the step reduceByKey and prints the Warning
14/04/11 21:29:47 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
14/04/11 21:30:02 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory