GitHub user vanzin commented on the issue:
https://github.com/apache/spark/pull/21588
The main thing is that this change alters test coverage based on the Hadoop
version. That means we're effectively changing the supported versions of Hive
here, and we should make all the necessary changes to let people know about
that. That includes deciding whether disabling those tests is the right thing
to do, or whether we should make them work instead.
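To be concrete, the pattern in question looks roughly like the sketch below.
This is illustrative only: the suite name, test name, and the exact version
check are mine, not the PR's actual code.

```scala
import org.apache.hadoop.util.VersionInfo
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical example of gating a test on the Hadoop version found on the
// test classpath; not the actual suite touched by this PR.
class HiveVersionGateExample extends AnyFunSuite {
  // Major version of the Hadoop libraries on the classpath, e.g. 2 or 3.
  private val hadoopMajor = VersionInfo.getVersion.split("\\.").head.toInt

  test("talk to a Hive 2.1 metastore") {
    // assume() cancels the test instead of failing it, which is exactly how
    // coverage silently shrinks when running against Hadoop 3.
    assume(hadoopMajor < 3, "Hive 2.1 client does not recognize Hadoop 3.x")
    // ... actual metastore interaction would go here ...
  }
}
```

Cancelled tests don't fail the build, so nothing flags that the Hive 2.1
coverage is simply gone on Hadoop 3 profiles.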
The error message you saw may be familiar to you, but I'm pretty sure it
would be very cryptic to someone who isn't familiar with this problem. (Why is
Hive complaining about a Hadoop version when I'm running Spark?)
The Hive 2.1 suite you're disabling is also pretty important to keep
working, since it tests behavior that changed from Spark's built-in version of
Hive and most probably remains similar in newer Hive versions.
We should be looking at what it means to support Hadoop 3, and answer that
question before we go hacking and disabling things just to get tests to pass.
I also really don't see the point of this before we fix the Hive fork...