PySpark failing on a mid-sized broadcast

2015-11-30 Thread ameyc
Tuning the yarn.executor.memoryOverhead property doesn't seem to make much of a difference. Has anyone else come across this before? - Amey
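
For reference, a minimal sketch of how the overhead might be raised when building the SparkContext, using the Spark 1.x-era key spark.yarn.executor.memoryOverhead (the 2048 MB value, the app name, and the toy broadcast are placeholders, not details from this thread):

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setAppName("broadcast-repro")  # hypothetical app name
            # Off-heap headroom for the YARN container, in MB;
            # 2048 is an illustrative value, not a recommendation.
            .set("spark.yarn.executor.memoryOverhead", "2048"))
    sc = SparkContext(conf=conf)

    # Toy stand-in for a mid-sized broadcast (e.g. a lookup dict).
    lookup = sc.broadcast({i: str(i) for i in range(1000)})
    print(sc.parallelize(range(10))
            .map(lambda x: lookup.value[x])
            .collect())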

Re: PySpark failing on a mid-sized broadcast

2015-11-30 Thread ameyc
BTW, my spark.python.worker.reuse setting is set to "true".
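
As a diagnostic (my own guess, not a fix suggested in the thread), one could flip that setting off to check whether reused Python workers interact badly with the broadcast:

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setAppName("broadcast-no-reuse")  # hypothetical app name
            # Spawn a fresh Python worker per task instead of reusing
            # long-lived ones, to isolate worker-reuse effects.
            .set("spark.python.worker.reuse", "false"))
    sc = SparkContext(conf=conf)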