Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/12204#issuecomment-214379244
Because the code comment gives no explanation of the reason you mentioned
(there is only a TODO), I am not sure about that.
Simply put, when we want to persist a partitioned data source table, we need
to prepare the partition metadata and update the Hive metastore with it. As
we now have better catalog support and APIs, that is not difficult to do. You
can see that I prepare the partition metadata with the partition directory
locations and partition values, then update the Hive metastore.
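To make the idea concrete, here is a minimal, self-contained sketch (not Spark's actual implementation; the function name `build_partition_locations` is hypothetical) of the kind of partition metadata being described: a mapping from partition column values to Hive-style directory locations, which could then be registered with a metastore.

```python
from itertools import product

def build_partition_locations(base_path, partition_values):
    """Map each combination of partition column values to its
    Hive-style directory, e.g. base/year=2016/month=4.

    partition_values: dict of column name -> list of values,
    in partition-column order.
    """
    columns = list(partition_values)
    locations = {}
    for combo in product(*partition_values.values()):
        spec = dict(zip(columns, combo))
        # Hive-style path suffix: col1=val1/col2=val2/...
        suffix = "/".join(f"{c}={v}" for c, v in spec.items())
        locations[tuple(combo)] = f"{base_path}/{suffix}"
    return locations

locations = build_partition_locations(
    "hdfs://nn/warehouse/t", {"year": [2015, 2016], "month": [4]})
# e.g. locations[(2016, 4)] == "hdfs://nn/warehouse/t/year=2016/month=4"
```

Each (partition values, location) pair here corresponds to one partition entry that would be added to the Hive metastore.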