I would check the queue you are submitting the job to, assuming it is YARN...
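
For example, a minimal sketch assuming the master is YARN (the queue name "myQueue" below is hypothetical; substitute a queue that actually exists on your cluster):

    // Sketch only: submit the application to a specific YARN queue that
    // has free capacity. "myQueue" is a placeholder, not a value from the
    // original post.
    SparkSession spark = SparkSession.builder()
            .appName("Converter - Benchmark")
            .master("yarn")                         // assuming a YARN master
            .config("spark.yarn.queue", "myQueue")  // queue with available resources
            .config("spark.executor.memory", "16g")
            .getOrCreate();

The YARN ResourceManager UI will also show whether the queue has pending applications or has run out of capacity.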

On Tue, Sep 26, 2017 at 11:40 PM, JG Perrin <jper...@lumeris.com> wrote:

> Hi,
>
>
>
> I get the infamous:
>
> Initial job has not accepted any resources; check your cluster UI to
> ensure that workers are registered and have sufficient resources
>
>
>
> I run the app via Eclipse, connecting:
>
>         SparkSession spark = SparkSession.builder()
>                 .appName("Converter - Benchmark")
>                 .master(ConfigurationManager.getMaster())
>                 .config("spark.cores.max", "4")
>                 .config("spark.executor.memory", "16g")
>                 .getOrCreate();
>
> Everything seems ok on the cluster side:
>
> I probably missed something super obvious, but can’t find it…
>
> Any help/hint is welcome! - TIA
>
> jg



-- 
Best Regards,
Ayan Guha
