Actually, a non-trivial fraction of the users / customers I interact with still 
use very old Hive metastores, because it's very difficult to upgrade a Hive 
metastore wholesale (it would require all the production jobs that access the 
same metastore to be upgraded at once). This is even harder than a JVM upgrade, 
which can be done on a per-job basis, or an OS upgrade, which can be done on a 
per-machine basis.

Is there a high maintenance cost to keeping these? My understanding is that 
Michael did a good job initially with classloader isolation and a modular 
design, so that they are very easy to maintain.
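
For context, this is the mechanism that makes the per-version support cheap: a 
job pins the metastore client version through the documented configs, and Spark 
loads the matching client jars in an isolated classloader, so different jobs 
against the same metastore don't have to move in lockstep. A minimal sketch 
(the version string and jar source below are illustrative choices, not a 
recommendation):

    import org.apache.spark.sql.SparkSession

    // Point one Spark job at an old HMS without touching other jobs
    // sharing the same metastore.
    val spark = SparkSession.builder()
      .appName("legacy-hms-client")
      // Hive metastore version to speak to; 0.13.1 is picked here just
      // as an example of the 0.x line under discussion.
      .config("spark.sql.hive.metastore.version", "0.13.1")
      // Where to get the matching client jars: "maven" downloads them;
      // "builtin" or an explicit classpath are the other documented options.
      .config("spark.sql.hive.metastore.jars", "maven")
      .enableHiveSupport()
      .getOrCreate()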

> On Jan 22, 2019, at 11:13 PM, Hyukjin Kwon <gurwls...@gmail.com> wrote:
> 
> Yea, I was thinking about that too. They are too old to keep. +1 for removing 
> them.
> 
> On Wed, Jan 23, 2019, at 11:30 AM, Dongjoon Hyun <dongjoon.h...@gmail.com> wrote:
>> Hi, All.
>> 
>> Currently, Apache Spark supports Hive Metastore (HMS) 0.12 ~ 2.3.
>> Among them, the HMS 0.x releases look very old now that we are in 2019.
>> If these are no longer used in production, can we drop HMS 0.x 
>> support in 3.0.0?
>> 
>> hive-0.12.0                     2013-10-10
>> hive-0.13.0                     2014-04-15
>> hive-0.13.1                     2014-11-16
>> hive-0.14.0                     2014-11-16
>> ( https://archive.apache.org/dist/hive/ )
>> 
>> In addition, if there is someone who is still using these HMS versions and 
>> plans to install and use Spark 3.0.0 with them, could you 
>> reply to this email thread? If there is a reason to keep them, that would be 
>> very helpful for me to know.
>> 
>> Thanks,
>> Dongjoon.
