Hey Dongjoon and Felix,

I totally agree that Hive 2.3 is more stable than Hive 1.2. Otherwise, we
wouldn't even consider integrating with Hive 2.3 in Spark 3.0.

However, *"Hive" and "Hive integration in Spark" are two quite different
things*, and I don't think anybody has ever mentioned "the forked Hive
1.2.1 is stable" in any recent Hadoop/Hive version discussions (at least I
double-checked all my replies).

What I really care about is the stability and quality of "Hive integration
in Spark", which have gone through some major updates due to the recent
Hive 2.3 upgrade in Spark 3.0. We have already found bugs in this piece, and
empirically, for a significant upgrade like this one, it would not be
surprising if more bugs/regressions were found in the near future. On the
other hand, the Hive 1.2 integration code path in Spark has been
battle-tested for years. Yes, there are issues, but people have learned how
to live with them. And please don't forget that Spark 3.0 end-users who
really don't want to interact with this Hive 1.2 fork can always use Hive
2.3 at their own risk.
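(As a rough sketch of what that choice looks like in practice, assuming
Spark's standard `spark.sql.hive.metastore.*` settings; the version number
and jar path below are placeholders, not a recommendation:)

```shell
# Sketch only: pointing Spark at a Hive 2.3 metastore client instead of the
# bundled fork, via Spark's spark.sql.hive.metastore.* configuration keys.
# The version and jar path are placeholders for your own environment.
spark-submit \
  --conf spark.sql.hive.metastore.version=2.3.6 \
  --conf spark.sql.hive.metastore.jars=/path/to/hive-2.3-jars/* \
  your-app.jar
```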

True, "stable" is quite a vague criterion, and hard to prove. But that
is exactly the reason why we may want to be conservative and wait for some
time and see whether there are further signals suggesting that the Hive 2.3
integration in Spark 3.0 is *unstable*. After one or two Spark 3.x minor
releases, if we've fixed all the outstanding issues and no more significant
ones are showing up, we can declare that the Hive 2.3 integration in Spark
3.x is stable, and then we can consider removing reference to the Hive 1.2
fork. Does that make sense?

Cheng

On Wed, Nov 20, 2019 at 11:49 AM Felix Cheung <felixcheun...@hotmail.com>
wrote:

> Just to add - the Hive 1.2 fork is definitely not more stable. We know of a
> few critical bug fixes that we cherry-picked into a fork of that fork to
> maintain ourselves.
>
>
> ------------------------------
> *From:* Dongjoon Hyun <dongjoon.h...@gmail.com>
> *Sent:* Wednesday, November 20, 2019 11:07:47 AM
> *To:* Sean Owen <sro...@gmail.com>
> *Cc:* dev <dev@spark.apache.org>
> *Subject:* Re: The Myth: the forked Hive 1.2.1 is stabler than XXX
>
> Thanks. That will be a giant step forward, Sean!
>
> > I'd prefer making it the default in the POM for 3.0.
>
> Bests,
> Dongjoon.
>
> On Wed, Nov 20, 2019 at 11:02 AM Sean Owen <sro...@gmail.com> wrote:
>
> Yeah, 'stable' is ambiguous. It's old and buggy, but at least it's the
> same old and buggy that's been there a while; "stable" in that sense.
> I'm sure there is a lot more delta between Hive 1 and 2 in terms of
> bug fixes that are important; the question isn't just 1.x releases.
>
> What I don't know is how much of that affects Spark, as it's mostly a Hive
> client. Clearly some of it does.
>
> I'd prefer making it the default in the POM for 3.0. Mostly on the
> grounds that its effects are on deployed clusters, not apps. And
> deployers can still choose a binary distro with 1.x or make the choice
> they want. Those that don't care should probably be nudged to 2.x.
> Spark 3.x is already full of behavior changes and 'unstable', so I
> think this is minor relative to the overall risk question.
>
> On Wed, Nov 20, 2019 at 12:53 PM Dongjoon Hyun <dongjoon.h...@gmail.com>
> wrote:
> >
> > Hi, All.
> >
> > I'm sending this email because it's important to discuss this topic
> > narrowly and make a clear conclusion.
> >
> > `The forked Hive 1.2.1 is stable`? It sounds like a myth we created
> > by ignoring the existing bugs. If you want to say the forked Hive 1.2.1
> > is stabler than XXX, please give us the evidence. Then, we can fix it.
> > Otherwise, let's stop making `The forked Hive 1.2.1` invincible.
> >
> > Historically, the following forked Hive 1.2.1 has never been stable.
> > It's just frozen. Since the forked Hive is out of our control, we ignored
> > bugs. That's all. The reality is far from stable.
> >
> >     https://mvnrepository.com/artifact/org.spark-project.hive/
> >     https://mvnrepository.com/artifact/org.spark-project.hive/hive-exec/1.2.1.spark (August 2015)
> >     https://mvnrepository.com/artifact/org.spark-project.hive/hive-exec/1.2.1.spark2 (April 2016)
> >
> > First, let's begin with Hive itself by comparing the fork with Apache
> > Hive 1.2.2 and 1.2.3,
> >
> >     Apache Hive 1.2.2 has 50 bug fixes.
> >     Apache Hive 1.2.3 has 9 bug fixes.
> >
> > I will not cover all of them, but the Apache Hive community also
> > backports important patches, just like the Apache Spark community.
> >
> > Second, let's move to SPARK issues, because we aren't exposed to all
> > Hive issues.
> >
> >     SPARK-19109 ORC metadata section can sometimes exceed protobuf message size limit
> >     SPARK-22267 Spark SQL incorrectly reads ORC file when column order is different
> >
> > These have been reported since Apache Spark 1.6.x because the forked
> > Hive doesn't have proper upstream patches like HIVE-11592 (fixed in
> > Apache Hive 1.3.0).
> >
> > Since we couldn't update the frozen forked Hive, we added an Apache ORC
> > dependency at SPARK-20682 (2.3.0), added a switching configuration at
> > SPARK-20728 (2.3.0), and turned on `spark.sql.hive.convertMetastoreOrc`
> > by default at SPARK-22279 (2.4.0).
> > However, if you turn off the switch and start to use the forked Hive,
> > you will be exposed to the buggy forked Hive 1.2.1 again.
> >
> > Third, let's talk about new features like Hadoop 3 and JDK 11.
> > No one believes that the ancient forked Hive 1.2.1 will work with these.
> > I saw the following issue mentioned as evidence of a Hive 2.3.6 bug.
> >
> >     SPARK-29245 ClassCastException during creating HiveMetaStoreClient
> >
> > Yes, I know that issue because I reported it and verified HIVE-21508.
> > It's fixed already and will be released in Apache Hive 2.3.7.
> >
> > Can we imagine something like this in the forked Hive 1.2.1?
> > 'No'. There is no future on it. It's frozen.
> >
> > From now on, I want to claim that the forked Hive 1.2.1 is the unstable
> > one. I welcome all your positive and negative opinions.
> > Please share your concerns and problems, and let's fix them together.
> > Apache Spark is an open source project we share.
> >
> > Bests,
> > Dongjoon.
> >
>
>
