Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2616#issuecomment-57909183
So this bug can be triggered by lower versions of Hadoop, e.g. 1.0.3. I
haven't validated the exact version range yet.
In `Hive.loadDynamicPartitions`, Hive calls
`o.a.h.h.q.e.Utilities.getFileStatusRecurse` to glob the temporary directory
for data files. It seems that lower versions of Hadoop don't filter out files
like `_SUCCESS`, which causes the problem.
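For illustration only (all paths and names below are hypothetical, not Hive's or this PR's actual code), a hidden-file `PathFilter` along these lines is what would keep markers like `_SUCCESS` out of such a listing:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{Path, PathFilter}

// Hypothetical filter: skip job-commit markers and other hidden files
// (names starting with "_" or ".") when listing a staging directory.
val dataFileFilter = new PathFilter {
  override def accept(path: Path): Boolean = {
    val name = path.getName
    !name.startsWith("_") && !name.startsWith(".")
  }
}

val conf = new Configuration()
val stagingDir = new Path("/tmp/hive-staging") // hypothetical staging path
val fs = stagingDir.getFileSystem(conf)

// With the filter applied, "_SUCCESS" never shows up in the listing,
// so only genuine data files are picked up.
val dataFiles = fs.listStatus(stagingDir, dataFileFilter)
```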
Within Hive, `loadDynamicPartitions` is only used in operations like
`LOAD`. At the end of a normal insertion into a dynamically partitioned table,
`FileSinkOperator` calls `Utilities.mvFileToFinalPath` to move the entire
temporary directory to the target location, and thus doesn't hit this problem.
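Conceptually (this is only a rough sketch with hypothetical paths, not the actual implementation of `mvFileToFinalPath`), that code path amounts to a wholesale directory move, so it never lists individual files in the staging directory:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

val conf = new Configuration()
val tmpDir = new Path("/tmp/hive-staging/ext-10000")   // hypothetical temp dir
val finalDir = new Path("/user/hive/warehouse/t/ds=1") // hypothetical target
val fs = tmpDir.getFileSystem(conf)

// Move the staging directory into place in one rename: no per-file listing
// of the staging directory happens, so stray markers like "_SUCCESS" never
// need to be filtered out on this path.
if (fs.exists(finalDir)) {
  fs.delete(finalDir, true)
}
fs.rename(tmpDir, finalDir)
```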
`Utilities.mvFileToFinalPath` is more efficient than
`Hive.loadDynamicPartitions` since it doesn't parse and validate partition
specs, but it requires some internal Hive data structures like
`DynamicPartitionCtx`. I'll try to see whether I can mock these data structures
and use `mvFileToFinalPath` instead.