CHENXCHEN opened a new pull request, #36070:
URL: https://github.com/apache/spark/pull/36070

   ### What changes were proposed in this pull request?
   When writing to a partitioned table, if a partition's filesystem differs from the filesystem of the table location, we get an exception: `java.lang.IllegalArgumentException: Wrong FS: s3a://path/to/spark3_snap/dt=2020-09-10, expected: hdfs://cluster`. This happens because `HadoopMapReduceCommitProtocol` uses the filesystem of the table location for all file operations.
   For example, the following SQL triggers the exception:
   ```sql
   CREATE TABLE tmp.spark3_snap (id STRING) PARTITIONED BY (dt STRING)
   STORED AS ORC LOCATION 'hdfs://path/to/spark3_snap';
   
   -- The filesystem of the partition location differs from the filesystem
   -- of the table location: one is S3A, the other is HDFS
   ALTER TABLE tmp.spark3_snap ADD PARTITION (dt='2020-09-10')
   LOCATION 's3a://path/to/spark3_snap/dt=2020-09-10';
   
   INSERT OVERWRITE TABLE tmp.spark3_snap PARTITION (dt)
   SELECT '10' id, '2020-09-09' dt
   UNION
   SELECT '20' id, '2020-09-10' dt;
   ```
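   
   To illustrate the failure mode (this is a simplified Python sketch, not the Scala change in this PR; the helper names are hypothetical), the committer resolves one filesystem from the table location and reuses it for every path, so a partition path with a different URI scheme fails Hadoop's "Wrong FS" check:
   
   ```python
   from urllib.parse import urlparse
   
   def scheme_of(path: str) -> str:
       # Hypothetical helper: identify a path's filesystem by its URI
       # scheme (e.g. "hdfs" or "s3a").
       return urlparse(path).scheme
   
   def check_same_fs(expected_fs: str, path: str) -> None:
       # Mimics Hadoop's "Wrong FS" check: a FileSystem instance refuses
       # to operate on a path that belongs to a different filesystem.
       if scheme_of(path) != expected_fs:
           raise ValueError(f"Wrong FS: {path}, expected: {expected_fs}")
   
   table_fs = scheme_of("hdfs://path/to/spark3_snap")
   partition = "s3a://path/to/spark3_snap/dt=2020-09-10"
   
   # Before the fix: the table location's filesystem is reused for the
   # partition path, so the check fails with "Wrong FS".
   try:
       check_same_fs(table_fs, partition)
   except ValueError as e:
       print(e)  # the "Wrong FS" message
   
   # After the fix: each path is resolved against its own filesystem,
   # so the same operation passes.
   check_same_fs(scheme_of(partition), partition)
   ```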
   
   ### Why are the changes needed?
   Without this change, we cannot operate on partitions whose filesystem differs from the filesystem of the table location.
   
   ### Does this PR introduce _any_ user-facing change?
   Yes. Before this PR, an exception is thrown when a user operates on a partition whose filesystem differs from the table location's filesystem. After this PR, each path is handled with its own filesystem and the operation succeeds.
   
   
   ### How was this patch tested?
   Manual testing
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
