FatalLin commented on pull request #32202: URL: https://github.com/apache/spark/pull/32202#issuecomment-846711680
> I found the same problem with partitioned Hive tables if they contain subdirectories, so why wasn't it changed in this action?

Do you mean it will hit the same problem if we trigger the action with the Hive engine instead of the Spark native reader? I thought that case could be handled with a Hive configuration such as `hive.mapred.supports.subdirectories`.
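For context, a minimal sketch (not part of this PR) of how that Hive setting is typically passed through a Spark session when reading via the Hive path; the app name and table name below are hypothetical, and whether these settings apply at all depends on the query actually going through the Hive reader rather than Spark's native one:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: enable subdirectory traversal for Hive-style reads.
val spark = SparkSession.builder()
  .appName("hive-subdir-example")  // hypothetical app name
  .enableHiveSupport()
  // Hive-side switch mentioned above; allows input formats to descend into subdirectories.
  .config("hive.mapred.supports.subdirectories", "true")
  // Companion Hadoop setting commonly paired with it for recursive input listing.
  .config("mapreduce.input.fileinputformat.input.dir.recursive", "true")
  .getOrCreate()

// Hypothetical partitioned Hive table whose partitions contain subdirectories.
spark.sql("SELECT * FROM some_partitioned_table").show()
```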
