Hi,

My Spark cluster is not able to run a job due to this warning:

WARN ClusterScheduler: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have sufficient
memory

The workers show the following status:

State: ALIVE
Cores: 2 (0 Used)
Memory: 6.3 GB (0.0 B Used)

So there must be plenty of memory available despite the warning message. I'm using the default Spark config; is there a config parameter that needs changing for this to work?
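For reference, here is roughly how the context is created: a minimal sketch, where the master URL is a placeholder and the commented-out settings are the ones I suspect might need overriding instead of relying on defaults.

    import org.apache.spark.{SparkConf, SparkContext}

    // Minimal sketch of how the job is launched, using default settings.
    // "spark://master-host:7077" is a placeholder for the real master URL.
    object ResourceTest {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("resource-test")
          .setMaster("spark://master-host:7077")
        // Possible overrides instead of the defaults:
        // conf.set("spark.executor.memory", "1g")  // memory requested per executor
        // conf.set("spark.cores.max", "2")         // cap on total cores for this app
        val sc = new SparkContext(conf)
        println(sc.parallelize(1 to 100).count()) // trivial job to trigger scheduling
        sc.stop()
      }
    }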
