FatalLin edited a comment on pull request #32202:
URL: https://github.com/apache/spark/pull/32202#issuecomment-822567233


   > > > But in Spark the operation only happens in Spark SQL, so earlier I only 
check the Hive-side configuration "hive.mapred.supports.subdirectories". 
What do you think? @attilapiros
   > > 
   > > 
   > > The original intention of this PR is to be compatible with Hive so I 
would check both configs as on the same machine I would expect to get the same 
answers when querying the non partitioned table with subdirectories.
   > 
   > got it, I'll check both configs, thanks!
   
   After some consideration (including studying a PR from another developer and 
rethinking point 4 that @attilapiros mentioned above), I decided to add a new 
config to replace the Hive configs we discussed earlier, but I'm not sure 
whether the config name is proper enough (maybe too long, I guess). As always, 
any feedback is appreciated!
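   To make the proposal concrete, here is a minimal sketch of the fallback 
precedence a dedicated Spark config could use: prefer the new Spark-side 
setting, and fall back to Hive's "hive.mapred.supports.subdirectories" when 
it is unset. The config name `spark.sql.hive.supportSubdirectories` is 
purely hypothetical, not the name used in the PR, and the settings map 
stands in for SQLConf/HiveConf lookups.

```scala
object SubdirSupportSketch {
  // Hypothetical new Spark config name; illustrative only.
  val sparkConfKey = "spark.sql.hive.supportSubdirectories"
  // Existing Hive-side config mentioned in the discussion above.
  val hiveConfKey = "hive.mapred.supports.subdirectories"

  // Returns true when either config enables subdirectory scanning,
  // with the Spark-side config taking precedence over the Hive one.
  def supportsSubdirectories(settings: Map[String, String]): Boolean =
    settings.get(sparkConfKey)
      .orElse(settings.get(hiveConfKey))
      .exists(_.trim.equalsIgnoreCase("true"))
}
```

   With this precedence, a user who sets only the Hive config keeps the 
Hive-compatible behavior, while the new Spark config can override it.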
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
