You can check org.apache.spark.sql.internal.SQLConf for other default settings
as well.
val SHUFFLE_PARTITIONS = SQLConfigBuilder("spark.sql.shuffle.partitions")
  .doc("The default number of partitions to use when shuffling data for joins or aggregations.")
  .intConf
  .createWithDefault(200)
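
You can also check the value in effect at runtime (a quick sketch, assuming a
Spark 2.x SparkSession named `spark`):

// Returns "200" unless the setting has been overridden
spark.conf.get("spark.sql.shuffle.partitions")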
`spark.default.parallelism` only applies to RDD operations; for DataFrame/SQL
shuffles you need to use `spark.sql.shuffle.partitions`.
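
For example (a minimal sketch, assuming the same Spark 2.x `spark` session; on
1.x you can call `sqlContext.setConf` with the same key instead):

// Lower the shuffle partition count for DataFrame/SQL joins and aggregations
spark.conf.set("spark.sql.shuffle.partitions", "20")

// Or set it cluster-wide in conf/spark-defaults.conf:
//   spark.sql.shuffle.partitions  20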
// maropu
On Fri, May 20, 2016 at 8:17 PM, εδΉι <251922...@qq.com> wrote:
> Hi all.
> I set spark.default.parallelism to 20 in spark-defaults.conf and sent this
> file to all nodes.
> But I found the reduce number is still the default value, 200.
> Does anyone else encounter this problem? Can anyone give some advice?