May I know: is spark.sql.shuffle.partitions=auto only available on Databricks?
What about on vanilla Spark? When I set it to auto there, I get an error saying the value must be an integer.
Is there any open-source library that automatically finds the best partition count and block size for a DataFrame?
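For context, this is roughly what I tried in spark-defaults.conf (a sketch; the "auto" value is accepted on Databricks, while open-source Spark expects an integer such as the default 200):

```
# Works on Databricks, fails on vanilla Spark with "must be an int":
spark.sql.shuffle.partitions    auto

# Vanilla Spark requires an integer value instead:
spark.sql.shuffle.partitions    200
```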
