I can't comment on plans for Spark SQL's support for Hive, but several
companies are porting Hive itself onto Spark:

http://blog.cloudera.com/blog/2014/11/apache-hive-on-apache-spark-the-first-demo/

I'm not sure whether they are leveraging the old Shark code base, but it
appears to be a fresh effort.

dean

Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
<http://shop.oreilly.com/product/0636920033073.do> (O'Reilly)
Typesafe <http://typesafe.com>
@deanwampler <http://twitter.com/deanwampler>
http://polyglotprogramming.com

On Fri, Nov 21, 2014 at 2:51 PM, Zhan Zhang <zhaz...@gmail.com> wrote:

> The Spark and Hive integration is a very nice feature, but I am wondering
> what the long-term roadmap is for Spark's integration with Hive. Both
> projects are undergoing rapid improvement and change. Currently, my
> understanding is that the Spark SQL Hive support relies on the Hive
> metastore and basic parser to operate, and that the Thrift server
> intercepts Hive queries and executes them with Spark's own engine.
>
> With every release of Hive, a significant effort is required on the Spark
> side to support it.
>
> For the metastore part, we could possibly replace it with HCatalog. But
> given the dependency of other parts on Hive, e.g., the metastore and
> Thrift server, HCatalog may not be able to help much.
>
> Does anyone have any insight or idea in mind?
>
> Thanks.
>
> Zhan Zhang
>
>
>
> --
> View this message in context:
> http://apache-spark-developers-list.1001551.n3.nabble.com/How-spark-and-hive-integrate-in-long-term-tp9482.html
> Sent from the Apache Spark Developers List mailing list archive at
> Nabble.com.
>
>
