I am running the following command on a Hadoop cluster to launch the Spark
shell with dynamic resource allocation (DRA):
spark-shell --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=4 \
  --conf spark.dynamicAllocation.maxExecutors=12 \
  --conf spark.dy
I have the following script in a file named test.R:
library(SparkR)
sc <- sparkR.init(master = "yarn-client")    # connect to YARN in client mode
sqlContext <- sparkRSQL.init(sc)             # create a SQLContext from the SparkContext
df <- createDataFrame(sqlContext, faithful)  # distribute R's built-in faithful dataset
showDF(df)                                   # print the first rows of the DataFrame
sparkR.stop()
q(save = "no")
If I submit this with "sparkR test.R" or "R CMD BATCH test.R" or "Rscript
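In case it is useful for reproducing the setup: the same dynamic-allocation settings could in principle be passed programmatically through the sparkEnvir argument of sparkR.init (part of the Spark 1.x SparkR API), rather than on the launcher command line. This is only a sketch; the conf values are assumptions carried over from the spark-shell command above and would need a running YARN cluster with the external shuffle service enabled:

```r
library(SparkR)

# Pass the DRA confs directly when creating the context
# (values mirror the spark-shell command above; assumed, not verified).
sc <- sparkR.init(
  master     = "yarn-client",
  sparkEnvir = list(
    spark.dynamicAllocation.enabled      = "true",
    spark.shuffle.service.enabled        = "true",
    spark.dynamicAllocation.minExecutors = "4",
    spark.dynamicAllocation.maxExecutors = "12"
  )
)
```

Whether these confs take effect when set from inside the script (as opposed to via spark-submit --conf) may depend on the Spark version, since some settings must be fixed before the context is created.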