Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/13737
@MLnick @rxin yeah, so after poking at it a bit today I don't see a good way
to disentangle this - we presumably want to make sure that PySpark works well
with a Hive-based SparkSession (even if the tests weren't exercising
Hive-specific functionality).
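
For reference, a minimal sketch (not from this PR) of the kind of session the Python tests would be running against - a SparkSession with Hive support enabled - where even non-Hive tests should behave the same:

```python
from pyspark.sql import SparkSession

# Hive-backed session; local[2] and the app name are just illustrative.
spark = (SparkSession.builder
         .master("local[2]")
         .appName("pyspark-hive-session-check")
         .enableHiveSupport()
         .getOrCreate())

# Plain DataFrame/SQL behaviour should be unchanged with Hive support on.
spark.range(5).createOrReplaceTempView("t")
assert spark.sql("SELECT count(*) FROM t").collect()[0][0] == 5
```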
If we would rather fix it by disabling the loading of tables from Python,
I did make some changes so that we could skip loading the test tables /
required files for the Python-based tests (which might be a good idea
regardless, since loading the files presumably takes some time and the Scala
test tables aren't used in the Python tests).
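
Roughly the shape I have in mind, as a hedged sketch only - the flag name
`SPARK_SKIP_TEST_TABLES` and the loader function are placeholders, not what the
actual change uses:

```python
import os

def load_test_tables(spark):
    # Placeholder for the existing fixture loader that registers the
    # Scala-side Hive test tables; the real logic lives elsewhere.
    pass

def maybe_load_test_tables(spark):
    # Skip the (relatively slow) fixture loading when the Python test
    # runner sets this hypothetical flag, since those tables go unused
    # in the Python suites.
    if os.environ.get("SPARK_SKIP_TEST_TABLES", "0") == "1":
        return
    load_test_tables(spark)
```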