Read the docs at the link that you pasted:
http://spark.apache.org/docs/latest/sql-programming-guide.html#interacting-with-different-versions-of-hive-metastore

Spark always compiles against the same version of Hive (1.2.1), but it can
dynamically load jars to talk to other versions of the metastore.
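For the Hive 0.12.0 metastore you mention below, that means no special build
profile is needed; you just point the runtime at the older metastore. A
minimal sketch of what that could look like in Spark 1.5.x (the app name and
the SHOW TABLES query are only placeholders, not from your setup):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val conf = new SparkConf()
      .setAppName("hive-012-metastore-example")
      // Version of the Hive metastore to connect to.
      .set("spark.sql.hive.metastore.version", "0.12.0")
      // "maven" downloads the matching Hive jars at runtime; alternatively,
      // point this at a classpath of locally installed Hive 0.12.0 jars.
      .set("spark.sql.hive.metastore.jars", "maven")

    val sc = new SparkContext(conf)
    val hiveContext = new HiveContext(sc)
    hiveContext.sql("SHOW TABLES").show()

The same two properties can also be set in spark-defaults.conf instead of
programmatically.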

On Fri, Mar 4, 2016 at 11:20 AM, Yong Zhang <java8...@hotmail.com> wrote:

> When I tried to compile Spark 1.5.2 with -Phive-0.12.0, Maven gave me an
> error that the profile doesn't exist any more.
>
> But when I read the Spark SQL programming guide here:
> http://spark.apache.org/docs/1.5.2/sql-programming-guide.html
> It keeps mentioning that Spark 1.5.2 can still work with a Hive 0.12
> metastore, for example:
>
> Compatibility with Apache Hive
>
> Spark SQL is designed to be compatible with the Hive Metastore, SerDes and
> UDFs. Currently Hive SerDes and UDFs are based on Hive 1.2.1, and Spark SQL
> can be connected to different versions of Hive Metastore (from *0.12.0* to
> 1.2.1; also see
> http://spark.apache.org/docs/latest/sql-programming-guide.html#interacting-with-different-versions-of-hive-metastore
> ).
>
>
>
> spark.sql.hive.metastore.version (default: 1.2.1) - Version of the Hive
> metastore. Available options are *0.12.0* through 1.2.1.
> So I am very confused about this. In our environment, the Hadoop
> distribution from our vendor still comes with Hive 0.12.0, but I plan to
> upgrade Spark (which we deploy ourselves) from 1.3.1 to 1.5.2. If possible,
> I would like Spark 1.5.2 to keep using the Hive 0.12.0 metastore in Hadoop,
> but I don't think I have that option after trying to compile it.
>
> Is this due to a bug in the document, or am I missing something?
>
> Thanks
>
> Yong
