Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/6314#issuecomment-104688782
@yhuai We headed in the wrong direction at first. The problem is not that
`executionHive` can't find the proper PostgreSQL configurations. The failure
occurs because `metadataHive` only overrides the metastore location without
overriding the other JDO and Datanucleus properties, so those properties are
still read from `hive-site.xml` and point to PostgreSQL. As a result,
`metadataHive` tries to connect to the temporary Derby metastore with
PostgreSQL settings, which causes the error. What @WangTaoTheTonic did was
override all related properties with Hive default values (gathered from
`ConfVars`). That's why updating the `executionHive` related code path
corrects the behavior.
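To make the failure mode concrete, here is a minimal, self-contained sketch (not Spark's actual code; the method names and the simulated property maps are illustrative only) of why replacing just the connection URL leaves the PostgreSQL JDO/Datanucleus settings from `hive-site.xml` in effect, while resetting all related properties to Hive defaults does not:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the metastore configuration override bug.
public class MetastoreOverrideSketch {

    // Simulated properties as read from hive-site.xml (pointing at PostgreSQL).
    static Map<String, String> hiveSiteConf() {
        Map<String, String> conf = new HashMap<>();
        conf.put("javax.jdo.option.ConnectionURL",
                 "jdbc:postgresql://db:5432/metastore");
        conf.put("javax.jdo.option.ConnectionDriverName",
                 "org.postgresql.Driver");
        return conf;
    }

    // Buggy approach: only the metastore location is replaced, so the
    // PostgreSQL JDBC driver is still used against the temporary Derby store.
    static Map<String, String> overrideLocationOnly(Map<String, String> conf,
                                                    String derbyUrl) {
        Map<String, String> result = new HashMap<>(conf);
        result.put("javax.jdo.option.ConnectionURL", derbyUrl);
        return result;
    }

    // Fixed approach: reset the other JDO/Datanucleus properties to Hive
    // default values (in Spark, gathered from ConfVars) before pointing the
    // connection URL at Derby.
    static Map<String, String> overrideAll(Map<String, String> conf,
                                           String derbyUrl) {
        Map<String, String> result = new HashMap<>(conf);
        result.put("javax.jdo.option.ConnectionDriverName",
                   "org.apache.derby.jdbc.EmbeddedDriver");  // Hive default
        result.put("javax.jdo.option.ConnectionURL", derbyUrl);
        return result;
    }

    public static void main(String[] args) {
        String derbyUrl = "jdbc:derby:memory:metastore;create=true";
        // Buggy: Derby URL paired with the PostgreSQL driver -> connection error.
        System.out.println(overrideLocationOnly(hiveSiteConf(), derbyUrl)
                .get("javax.jdo.option.ConnectionDriverName"));
        // Fixed: Derby URL paired with the Derby driver.
        System.out.println(overrideAll(hiveSiteConf(), derbyUrl)
                .get("javax.jdo.option.ConnectionDriverName"));
    }
}
```

Under this sketch, the buggy map still carries `org.postgresql.Driver` alongside a Derby URL, which mirrors the mismatch described above.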