turboFei commented on issue #25863: [WIP][SPARK-29037][CORE][SQL] For static 
partition overwrite, spark may give duplicate result.
URL: https://github.com/apache/spark/pull/25863#issuecomment-534016173
 
 
   > Can we think of a simple approach? e.g. use a fixed-path staging dir and 
fail the job if the staging dir already exists.
   
   @cloud-fan 
   We can convert each specified partition key value to a fixed-width checksum, such as CRC8.
   
   And name the staging dir 
`.spark-staging-${staticKVs.size}-CRC1-CRC2-...-CRCN`.
   
   For example, for the SQL `insert overwrite table ta partition(p1=v1,p2=v2,p3) select ...`, we need to check the existence of:
   - .spark-staging-0
   - .spark-staging-1-CRC1
   - .spark-staging-2-CRC1-CRC2
   
   But we cannot check the existence of `.spark-staging-3-CRC1-CRC2-CRC3`, because `p3` is dynamic and its value (and thus `CRC3`) is unknown before the job runs.
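   
   To make the idea concrete, here is a minimal sketch of the naming and conflict-check scheme. It is illustrative only: it uses CRC32 from the standard library as a stand-in for the CRC8 mentioned above, and the function names (`staging_dir_name`, `conflicting_dirs`) are hypothetical, not part of Spark.
   
   ```python
   import zlib
   
   def staging_dir_name(static_values):
       # Build ".spark-staging-${staticKVs.size}-CRC1-...-CRCN" from the
       # static partition values; with no static values this yields
       # ".spark-staging-0". CRC32 stands in for CRC8 here.
       parts = [str(len(static_values))]
       parts += [format(zlib.crc32(v.encode()), "08x") for v in static_values]
       return ".spark-staging-" + "-".join(parts)
   
   def conflicting_dirs(static_values):
       # Staging dirs whose existence must be checked before starting a job:
       # every prefix of the static values, from the full-table overwrite
       # (".spark-staging-0") down to this job's own dir. Dirs of jobs with
       # MORE static values cannot be enumerated, since the extra values
       # are dynamic and unknown at this point.
       return [staging_dir_name(static_values[:k])
               for k in range(len(static_values) + 1)]
   ```
   
   For the `partition(p1=v1,p2=v2,p3)` example, `conflicting_dirs(["v1", "v2"])` produces the three dirs listed above (`.spark-staging-0`, `.spark-staging-1-...`, `.spark-staging-2-...`), but nothing at depth 3.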
   
   Do you have any suggestions? @advancedxy 
   
