Oh, you shouldn’t use spark-class for your own classes. Just build your job 
as a separate application, run it with “java”, and create a SparkContext 
inside it. spark-class is designed to run classes internal to the Spark project.
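
For what it’s worth, a minimal sketch of such a standalone job (using the 
Spark 0.8-era SparkContext constructor; the class name, cluster URL, and jar 
name below are placeholders, not taken from this thread):

```scala
// Minimal self-contained Spark job, compiled into its own jar and
// launched with plain "java" rather than spark-class.
// MyJob, the master URL, and myjob.jar are all placeholders.
import org.apache.spark.SparkContext

object MyJob {
  def main(args: Array[String]) {
    val sc = new SparkContext(
      "spark://<master>:7077",        // e.g. the contents of spark-ec2/cluster-url
      "MyJob",                        // application name shown in the cluster UI
      System.getenv("SPARK_HOME"),    // Spark installation directory on the workers
      Seq("myjob.jar"))               // jar(s) shipped to the executors
    val evens = sc.parallelize(1 to 100).filter(_ % 2 == 0).count()
    println("even count: " + evens)
    sc.stop()
  }
}
```

It would then be launched with something like 
java -cp myjob.jar:<path-to-spark-assembly-jar> MyJob, with no spark-class 
involved.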

Matei

On Jan 8, 2014, at 8:06 PM, Aureliano Buendia <[email protected]> wrote:

> 
> On Thu, Jan 9, 2014 at 3:59 AM, Matei Zaharia <[email protected]> wrote:
> Have you looked at the cluster UI? Do you see any workers registered 
> there, and your application listed under running applications? Maybe you 
> typed in the wrong master URL or something like that.
> 
> No, it's automated: the master URL comes from cat spark-ec2/cluster-url.
> 
> I think the problem might be caused by spark-class script. It seems to assign 
> too much memory.
> 
> I had forgotten that run-example doesn't use spark-class.
>  
> 
> Matei
> 
> On Jan 8, 2014, at 7:07 PM, Aureliano Buendia <[email protected]> wrote:
> 
>> The strange thing is that spark examples work fine, but when I include a 
>> spark example in my jar and deploy it, I get this error for the very same 
>> example:
>> 
>> WARN ClusterScheduler: Initial job has not accepted any resources; check 
>> your cluster UI to ensure that workers are registered and have sufficient 
>> memory
>> 
>> My jar is deployed to master and then to workers by spark-ec2/copy-dir. Why 
>> would including the example in my jar cause this error?
>> 
>> 
>> 
>> On Thu, Jan 9, 2014 at 12:41 AM, Aureliano Buendia <[email protected]> 
>> wrote:
>> Could someone explain how SPARK_MEM, SPARK_WORKER_MEMORY and 
>> spark.executor.memory should be related so that this unhelpful error 
>> doesn't occur?
>> 
>> Maybe there are more environment and Java config variables about memory 
>> that I'm missing.
>> 
>> By the way, the part of the error that asks to check the web UI is 
>> redundant; the UI is of no help here.
>> 
>> 
>> On Wed, Jan 8, 2014 at 4:31 PM, Aureliano Buendia <[email protected]> 
>> wrote:
>> Hi,
>> 
>> 
>> My spark cluster is not able to run a job due to this warning:
>> 
>> WARN ClusterScheduler: Initial job has not accepted any resources; check 
>> your cluster UI to ensure that workers are registered and have sufficient 
>> memory
>> 
>> The workers have these status:
>> 
>> ALIVE         2 (0 Used)     6.3 GB (0.0 B Used)
>> 
>> So there must be plenty of memory available despite the warning message. 
>> I'm using the default Spark config; is there a config parameter that 
>> needs changing for this to work?
>> 
>> 
> 
> 
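
(A note on the memory question raised earlier in the thread: in the 
standalone mode of this era, SPARK_WORKER_MEMORY is the total a worker 
offers to all applications, while spark.executor.memory, or the older 
SPARK_MEM, is what each application requests per worker. The "Initial job 
has not accepted any resources" warning typically fires when the request 
exceeds the offer, or when executors fail to register at all. A hedged 
sketch of conf/spark-env.sh, with placeholder values:)

```shell
# conf/spark-env.sh -- illustrative placeholder values only
# Total memory a worker offers to all applications combined:
export SPARK_WORKER_MEMORY=6g
# Per-application executor request; it must fit inside what the
# worker has free, or tasks are never scheduled and the "Initial
# job has not accepted any resources" warning appears:
export SPARK_JAVA_OPTS="-Dspark.executor.memory=4g"
```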
