On the Spark branch-1.0-jdbc branch, there is a Thrift JDBC server and a Beeline
CLI that roughly keep the Shark style, corresponding to the Shark server and the
Shark CLI, respectively. Check out
https://github.com/apache/spark/blob/branch-1.0-jdbc/docs/sql-programming-guide.md
for more information.
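
For reference, the basic workflow looks roughly like the sketch below. This is
only a sketch based on that guide; the script names and the default port 10000
are assumptions and may differ on this branch.

  # Start the Thrift JDBC server from the Spark root directory
  ./sbin/start-thriftserver.sh

  # Connect to it with the Beeline CLI (10000 is the usual HiveServer2 port)
  ./bin/beeline
  beeline> !connect jdbc:hive2://localhost:10000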

Du

From: Nicholas Chammas <nicholas.cham...@gmail.com>
Reply-To: "user@spark.apache.org" <user@spark.apache.org>
Date: Thursday, July 10, 2014 at 11:31 AM
To: user <user@spark.apache.org>
Subject: Re: Difference between SparkSQL and shark


In short, Spark SQL is the future, built from the ground up. Shark was built as
a drop-in replacement for Hive, will be retired, and will perhaps be replaced by
a future initiative to run Hive on Spark
(https://issues.apache.org/jira/browse/HIVE-7292).

More info:

  *   http://databricks.com/blog/2014/03/26/spark-sql-manipulating-structured-data-using-spark-2.html
  *   http://databricks.com/blog/2014/07/01/shark-spark-sql-hive-on-spark-and-the-future-of-sql-on-spark.html

Nick

On Thu, Jul 10, 2014 at 1:50 PM, hsy...@gmail.com <hsy...@gmail.com> wrote:
I have a newbie question. What is the difference between SparkSQL and Shark?


Best,
Siyuan
