Hi,
I am processing multiple CSV files (about 2 GB each) with my Spark application, which also performs a union and aggregation across all the input files. I am currently stuck with the error below:
java.lang.StackOverflowError
at
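For context, a common cause of StackOverflowError in this pattern is unioning the files one by one, which builds a lineage chain as deep as the number of inputs. Below is a minimal sketch of that anti-pattern and the usual alternatives; the paths and app name are hypothetical, and it assumes the Spark 1.6-era RDD API:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object CsvUnionSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("csv-union"))

    // Hypothetical input paths for illustration only.
    val paths: Seq[String] = Seq("/data/part1.csv", "/data/part2.csv")

    // Anti-pattern: a left fold of unions nests RDDs one level per file,
    // which can overflow the stack when the plan is traversed/serialized.
    // val combined = paths.map(sc.textFile(_)).reduce(_ union _)

    // Safer: SparkContext.union builds a single flat UnionRDD.
    val combined = sc.union(paths.map(p => sc.textFile(p)))

    // For iterative jobs, checkpointing also truncates lineage:
    // sc.setCheckpointDir("/tmp/ckpt"); combined.checkpoint()

    println(combined.count())
    sc.stop()
  }
}
```

This is only a sketch of the likely failure mode, not a confirmed diagnosis; whether it applies depends on how the union is actually written in the application.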
Hi,
I am running the application on Spark v1.6.2 (in standalone mode) over 100 GB of data. My configuration is given below:
Job configuration
spark.driver.memory=5g
spark.executor.memory=5g
spark.cores.max=4
spark-env.sh
export SPARK_WORKER_INSTANCES=3
export SPARK_WORKER_MEMORY=5g
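For completeness, the job-level settings above can also be passed at submit time instead of a properties file. A sketch, assuming a standalone master at a hypothetical `spark://master:7077` and a hypothetical application jar:

```shell
spark-submit \
  --master spark://master:7077 \
  --conf spark.driver.memory=5g \
  --conf spark.executor.memory=5g \
  --conf spark.cores.max=4 \
  my-app.jar
```

Note that with SPARK_WORKER_INSTANCES=3 and SPARK_WORKER_MEMORY=5g, each worker machine needs roughly 15 GB available for executors alone, on top of driver memory.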