You can pass hive-site.xml to the spark-submit command using the --files
option to make sure the Spark job is referring to the Hive metastore
you are interested in:

spark-submit --files /path/to/hive-site.xml
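
For context, a fuller invocation might look roughly like the following
(the class and jar names are just placeholders):

    # Ship hive-site.xml with the job so Spark's Hive support picks up
    # the same metastore configuration as your Hive deployment.
    # (class and jar names below are placeholders)
    spark-submit \
      --files /path/to/hive-site.xml \
      --class com.example.MyHiveApp \
      my-hive-app.jar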



On Sat, Aug 18, 2018 at 1:59 AM Patrick Alwell <palw...@hortonworks.com>
wrote:

> You probably need to take a look at your hive-site.xml and see what the
> location of the Hive metastore is. As for beeline, you can explicitly
> connect to a specific HiveServer2 instance by passing its JDBC URL when
> you launch the client, e.g. beeline -u "jdbc:hive2://example.com:10000"
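>
> The metastore location is usually the hive.metastore.uris property in
> hive-site.xml; a minimal sketch (the host below is a placeholder, 9083 is
> the default metastore Thrift port):
>
>   <property>
>     <name>hive.metastore.uris</name>
>     <value>thrift://metastore-host.example.com:9083</value>
>   </property>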
>
>
>
> Try taking a look at this
> https://jaceklaskowski.gitbooks.io/mastering-spark-sql/spark-sql-hive-metastore.html
>
>
>
> There should be conf settings you can update to make sure you are using
> the same metastore as the instance of HiveServer.
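>
> For example, something along these lines (the warehouse path, metastore
> host, and jar name are placeholders; the exact values depend on your
> cluster):
>
>   spark-submit \
>     --conf spark.sql.warehouse.dir=/user/hive/warehouse \
>     --conf spark.hadoop.hive.metastore.uris=thrift://metastore-host.example.com:9083 \
>     my-hive-app.jar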
>
>
>
> Hive Wiki is a great resource as well ☺
>
>
>
> *From: *Fabio Wada <fabio.w...@servix.com.INVALID>
> *Date: *Friday, August 17, 2018 at 11:22 AM
> *To: *"user@spark.apache.org" <user@spark.apache.org>
> *Subject: *Two different Hive instances running
>
>
>
> Hi,
>
>
>
> I am executing an insert into a Hive table using SparkSession in Java. When
> I run a select via beeline, I don't see the inserted data. And when I
> insert data using beeline, I don't see it from my program using SparkSession.
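>
> A minimal sketch of this kind of job, just to illustrate (the class, app,
> and table names are placeholders):
>
>   import org.apache.spark.sql.SparkSession;
>
>   public class HiveInsertExample {
>       public static void main(String[] args) {
>           // enableHiveSupport() makes SparkSession use the Hive metastore
>           // configured in hive-site.xml instead of a local Derby catalog.
>           SparkSession spark = SparkSession.builder()
>                   .appName("HiveInsertExample")
>                   .enableHiveSupport()
>                   .getOrCreate();
>
>           // Table name is a placeholder.
>           spark.sql("INSERT INTO my_db.my_table VALUES (1, 'example')");
>
>           spark.stop();
>       }
>   }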
>
>
>
> It looks like there are two different Hive instances running.
>
>
>
> How can I point both SparkSession and beeline to the same Hive instance?
>
>
>
> Thanks
>
>
