cloud-fan commented on issue #25556: [SPARK-28853][SQL] Support conf to organize file partitions by file path
URL: https://github.com/apache/spark/pull/25556#issuecomment-524758506
 
 
   > When we read those small files back again, we want them organized by partition (in my case, by day) in the FileRDD, so that when we write them back to the table, each RDD partition is written by a single task and produces only one file per partition through the dynamic write interface. That is how we finally merge the small files.
   
   Can't we do a `partitionBy` before writing a DataFrame?
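
   For reference, a minimal sketch of what that could look like, assuming the partition column is `day` (the column name and paths here are illustrative, not taken from the PR): repartitioning by the partition column first groups each day's rows into one task, and `partitionBy` then routes the output into per-day directories, so each day ends up with a single file.
   
   ```scala
   import org.apache.spark.sql.SparkSession
   import org.apache.spark.sql.functions.col
   
   val spark = SparkSession.builder().appName("compact-small-files").getOrCreate()
   
   // Read the many small files back in (path is illustrative).
   val df = spark.read.parquet("/warehouse/events")
   
   // Group each day's rows into one task, then write one file per
   // day directory via dynamic partitioning.
   df.repartition(col("day"))
     .write
     .partitionBy("day")
     .mode("overwrite")
     .parquet("/warehouse/events_compacted")
   ```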
