triplesheep commented on issue #25556: [SPARK-28853][SQL] Support conf to 
organize file partitions by file path
URL: https://github.com/apache/spark/pull/25556#issuecomment-524560341
 
 
   @srowen  @cloud-fan  emmm, actually I don't assume that similarly-named 
paths tend to be stored in the same physical location. Here is the scenario: 
it is for small-file merge.
   1. Consider a partitioned table: CREATE TABLE table_name PARTITIONED BY (day)
   2. When we write data into that table through the dynamic-partition write 
interface, each Spark task may write out more than one file, so the job can 
produce a lot of small files.
   3. To merge those small files, we read the data back and write it to the 
table again with fewer files.
   4. When we read those small files back, we want the file splits organized 
by partition (in my case, day) in the FileRDD. Then, when writing back to the 
table, each RDD partition is written by one task, and the dynamic-partition 
write interface emits only one file per task. That is how the small files 
get merged.
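   The grouping in step 4 can be sketched in plain Python (file names are 
illustrative only, and `partitions_by_path` is a hypothetical stand-in for 
the behavior the proposed conf would enable, not Spark's actual planner):

```python
import os
from itertools import groupby

# Hypothetical listing of small files left behind by a dynamic partitioned
# write: several part files per day= directory.
files = [
    "warehouse/t/day=2019-08-21/part-00000.parquet",
    "warehouse/t/day=2019-08-22/part-00001.parquet",
    "warehouse/t/day=2019-08-21/part-00002.parquet",
    "warehouse/t/day=2019-08-23/part-00000.parquet",
    "warehouse/t/day=2019-08-22/part-00003.parquet",
]

def partitions_by_path(paths):
    """Sort file splits by path and group them by parent (partition)
    directory, so all files of one table partition land in the same
    read partition (bucket)."""
    ordered = sorted(paths)
    return [list(g) for _, g in groupby(ordered, key=os.path.dirname)]

buckets = partitions_by_path(files)
# Every bucket now holds files from exactly one day= partition, so one
# write task per bucket would emit a single merged file per partition.
for b in buckets:
    assert len({os.path.dirname(p) for p in b}) == 1
```

   With size-based packing instead of path-based grouping, files from 
different day= partitions could share a read partition, and the dynamic 
write would again fan out multiple files per task.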
   
   We could do this by reading the files again and calling repartition, but 
that is expensive, so I opened this issue.
   I am not concerned with where the similarly-named paths are physically 
stored; I care that similarly-named paths end up in the same RDD partition 
when the data is read back from files :)
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
