How did you register the table under spark-shell? Two things to note:

1. To interact with Hive, you must use HiveContext instead of SQLContext.
2. `registerTempTable` doesn't persist the table into the Hive metastore; the table is lost once you quit spark-shell. Use `saveAsTable` instead.
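
For example, a minimal spark-shell sketch (Spark 1.2-era API; the CSV path, schema, and table name are illustrative, and a Spark build with Hive support is assumed):

    // Inside spark-shell; sc is the pre-built SparkContext.
    val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
    import hiveContext.createSchemaRDD

    case class Country(code: String, name: String)

    val countries = sc.textFile("countries.csv")  // illustrative path
      .map(_.split(","))
      .map(r => Country(r(0), r(1)))

    // registerTempTable would only register an in-memory table for
    // this session; saveAsTable writes the table through the Hive
    // metastore, so the Thrift server and beeline can see it.
    countries.saveAsTable("countries")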

On 12/9/14 5:27 PM, Anas Mosaad wrote:
Thanks Cheng,

I thought spark-sql uses the exact same metastore, right? However, it didn't work as expected. Here's what I did.

1. In spark-shell, I loaded a CSV file and registered the table, say countries.
2. Started the Thrift server.
3. Connected using beeline. When I run show tables or !tables, I get an empty list of tables, as follows:

    0: jdbc:hive2://localhost:10000> !tables
    +------------+--------------+-------------+-------------+----------+
    | TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  | TABLE_TYPE  | REMARKS  |
    +------------+--------------+-------------+-------------+----------+
    +------------+--------------+-------------+-------------+----------+

    0: jdbc:hive2://localhost:10000> show tables ;
    +---------+
    | result  |
    +---------+
    +---------+
    No rows selected (0.106 seconds)

    0: jdbc:hive2://localhost:10000>



Kindly advise, what am I missing? I want to read the RDD using SQL from outside spark-shell (i.e., like any other relational database).


On Tue, Dec 9, 2014 at 11:05 AM, Cheng Lian <lian.cs....@gmail.com> wrote:

    Essentially, the Spark SQL JDBC Thrift server is just a Spark port
    of HiveServer2. You don't need to run Hive, but you do need a
    working Metastore.
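
    For reference, a minimal way to start the server and connect with
    beeline (assuming the default port 10000, run from the Spark root
    directory):

        ./sbin/start-thriftserver.sh
        ./bin/beeline -u jdbc:hive2://localhost:10000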


    On 12/9/14 3:59 PM, Anas Mosaad wrote:
    Thanks Judy, this is exactly what I'm looking for. However, and
    please forgive me if it's a dumb question: it seems to me that
    Thrift is the same as the hive2 JDBC driver; does this mean that
    starting Thrift will start Hive as well on the server?

    On Mon, Dec 8, 2014 at 9:11 PM, Judy Nash
    <judyn...@exchange.microsoft.com> wrote:

        You can use the Thrift server for this purpose, then test it
        with beeline.

        See doc:
        https://spark.apache.org/docs/latest/sql-programming-guide.html#running-the-thrift-jdbc-server

        *From:* Anas Mosaad [mailto:anas.mos...@incorta.com]
        *Sent:* Monday, December 8, 2014 11:01 AM
        *To:* user@spark.apache.org
        *Subject:* Spark-SQL JDBC driver

        Hello Everyone,

        I'm brand new to Spark and was wondering if there's a JDBC
        driver to access Spark SQL directly. I'm running Spark in
        standalone mode and don't have Hadoop in this environment.

--
        *Best Regards/Best Wishes,*

        *Anas Mosaad*




--
    *Best Regards/Best Wishes,*
    *Anas Mosaad*
    *Incorta Inc.*
    *+20-100-743-4510*




--

*Best Regards/Best Wishes,*
*Anas Mosaad*
*Incorta Inc.*
*+20-100-743-4510*
