[
https://issues.apache.org/jira/browse/SPARK-20202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953304#comment-15953304
]
Sean Owen commented on SPARK-20202:
-----------------------------------
Agree. I think the logic was that Spark had released its own source/binary
version of Hive and then depended on that in Spark. I don't think anybody
believes that's a good long-term solution; it was a workaround for hive-exec's
packaging, IIRC. Once the underlying packaging issue is resolved this can go
away, but I defer to those who know the issue better on the details.
What I'm not clear on is whether the current org.spark-project.hive situation
stretches the source/binary policy so far that it breaks, enough that no more
releases can happen without fixing it. Best to make it go away ASAP anyway. But
I don't know if changes in Hive 2.5 help integration with Hive 1.x. It may
require either temporarily blessing the fork, or more jar surgery to un-uberize
the hive-exec jar or something.
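For context on the "jar surgery" option: hive-exec is published as an uber jar that bundles third-party classes, which is what forced the fork's repackaging. If I recall correctly, Hive also publishes a slimmer "core"-classified hive-exec artifact without the bundled dependencies. A sketch of what a de-uberized dependency might look like in a pom (the classifier and version property are assumptions, not a vetted configuration):

```xml
<!-- Sketch only: depend on the slim hive-exec artifact instead of the uber jar.
     Assumes Hive publishes a "core"-classified jar without shaded dependencies;
     any transitive artifacts it needs would then have to be declared explicitly. -->
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-exec</artifactId>
  <version>${hive.version}</version>
  <classifier>core</classifier>
</dependency>
```

Whether that works in practice depends on how cleanly the core jar separates from what it shades, which is exactly the detail deferred to above.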
> Remove references to org.spark-project.hive
> -------------------------------------------
>
> Key: SPARK-20202
> URL: https://issues.apache.org/jira/browse/SPARK-20202
> Project: Spark
> Issue Type: Bug
> Components: Build, SQL
> Affects Versions: 1.6.4, 2.0.3, 2.1.1
> Reporter: Owen O'Malley
> Priority: Critical
>
> Spark can't continue to depend on its fork of Hive and must move to
> standard Hive versions.