ting but I
couldn't find the configurations to change the behavior back to what it was
before.
Best regards,
*Babak Alipour ,*
*University of Florida*
column) is not that big, neither are the
original files. Any ideas?
On Sun, Oct 2, 2016 at 1:45 AM, Vadim Semenov <vadim.seme...@datadoghq.com>
wrote:
> oh, and try to run even smaller executors, i.e. with
> `spark.executor.memory`
)
at java.lang.Thread.run(Thread.java:745)
On Sat, Oct 1, 2016 at 11:35 PM, Babak Alipour <babak.alip...@gmail.com>
wrote:
> Do you mean running a multi-JVM 'cluster' on the single machine? How would
> that affect performance/memory?
On Fri, Sep 30, 2016 at 3:03 PM, Vadim Semenov <vadim.seme...@datadoghq.com>
wrote:
> Run more smaller executors: change `spark.executor.memory` to 32g and
> `spark.executor.cores` to 2-4, for example.
>
> Changing driver's
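As a concrete illustration of the sizing advice above, a launch command might look like the following. This is a hedged sketch, not the poster's actual command: the master URL, class name, and jar are placeholders, and the 32g / 4-core values simply restate the suggestion in the quoted reply.

```shell
# Hypothetical spark-submit invocation applying the "more, smaller
# executors" suggestion: 32g heap and 4 cores per executor, so a large
# machine hosts several executor JVMs instead of one huge one.
spark-submit \
  --master spark://localhost:7077 \
  --conf spark.executor.memory=32g \
  --conf spark.executor.cores=4 \
  --class com.example.MyApp \
  my-app.jar
```

Smaller executor heaps generally mean shorter GC pauses, which is usually the motivation for splitting one big executor into several.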
)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I'm running Spark in local mode, so there is only one executor (the driver), and spark.driver.memory is set to 64g. Changing the driver's memory doesn't help.
re knowledge of Spark can shed some light on this.
Thank you!