Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/21588
Yeah, that's all true. I admit that what you and @jerryshao did makes sense in a
way. If we fail to replace the Hive fork with 2.3.x and have to keep the current
fork, I agree that would be a last resort that might make sense.
It should never have been done with a fork like this in the first place. I see it
as a mistake we made in Spark, and what I *personally* thought was that we should
do our own cleanup, and that Spark should be mainly responsible for it.
I have tried hard to understand the context and to find the most reasonable way
(to me) to get through this smoothly.
I left a comment in
[SPARK-20202](https://issues.apache.org/jira/browse/SPARK-20202), which should be
the most appropriate place to discuss this.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]