Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/21588
Yup, it will still fail, but it fixes everything else needed to make Spark work
with Hadoop 3. I tried to keep the change minimal given the current status as
is; my goal was to make the tests pass without requiring other changes.
> If, when build with Hadoop 3, Spark will not support older versions of
Hive, that needs to be reflected in the code, not just in the tests.
Yup, but the error message was pretty readable to me, at least.
> And I'm also referring to the "org.spark-project" fork of Hive. That
hasn't been updated, right? Which means that if you run this PR here, it will
still fail?
It will fail, but it fixes everything else that we need for Spark with Hadoop 3.
I thought it should be tested, so I republished the fork under my
personal domain.