When I use .limit(), the returned DataFrame has only 1 partition,
which makes most downstream jobs fail.


val df = spark.sql("select * from table limit n")
df.write.parquet(....)
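One common workaround (not from the original post, and assuming the single partition only matters at write time) is to repartition the limited result before writing, so the write runs in parallel. The table name, row count, partition count, and output path below are placeholders:

```scala
// Sketch of a workaround: .limit() collapses the result to one partition,
// so explicitly repartition before the parquet write.
// All identifiers here (table, 1000, 200, path) are hypothetical.
val df = spark.sql("select * from table limit 1000")

df.repartition(200)            // choose a count that suits your cluster
  .write
  .parquet("/tmp/output/path") // hypothetical output path
```

The repartition adds a shuffle, but it distributes the limited rows across executors instead of funneling the entire write through a single task.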




Thanks!




