Re: Link existing Hive to Spark

2015-02-06 Thread Ashutosh Trivedi (MT2013030)
Hi Todd,

Thanks for the input.

I use IntelliJ as my IDE and create an SBT project, and I list all the
dependencies (for example hive, spark-sql, etc.) in build.sbt. These
dependencies stay in the local ivy2 repository after being downloaded from
Maven Central. Should I go into ivy2 and put hive-site.xml there?
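
For reference, a rough sketch of the build.sbt entries I mean (the artifact names are the standard Spark ones; the version shown is only an assumption):

// build.sbt (sketch) -- version 1.2.1 is an assumption, adjust to your cluster
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.2.1",
  "org.apache.spark" %% "spark-sql"  % "1.2.1",
  "org.apache.spark" %% "spark-hive" % "1.2.1"  // provides HiveContext
)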

If I build Spark from source code, I can put the file in conf/, but I am
avoiding that.


From: Todd Nist tsind...@gmail.com
Sent: Friday, February 6, 2015 8:32 PM
To: Ashutosh Trivedi (MT2013030)
Cc: user@spark.apache.org
Subject: Re: Link existing Hive to Spark

Hi Ashu,

Per the documentation:

Configuration of Hive is done by placing your hive-site.xml file in conf/.

For example, you can place something like this in your
$SPARK_HOME/conf/hive-site.xml file:


<configuration>
<property>
  <name>hive.metastore.uris</name>
  <!-- Ensure that the following statement points to the Hive Metastore URI in your cluster -->
  <value>thrift://HostNameHere:9083</value>
  <description>URI for client to contact metastore server</description>
</property>
</configuration>
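
With that file on Spark's classpath, nothing Hive-specific is needed in application code; a minimal sketch (Spark 1.2-era API, assuming an existing SparkContext named sc) would be:

import org.apache.spark.sql.hive.HiveContext

// HiveContext reads hive-site.xml from the conf/ dir / classpath, so it
// connects to the existing metastore instead of creating metastore_db locally
val hiveContext = new HiveContext(sc)
hiveContext.sql("SHOW TABLES").collect().foreach(println)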

HTH.

-Todd



On Fri, Feb 6, 2015 at 4:12 AM, ashu ashutosh.triv...@iiitb.org wrote:
Hi,
I have Hive in development and I want to use it in Spark. The Spark SQL
documentation says the following:

"Users who do not have an existing Hive deployment can still create a
HiveContext. When not configured by the hive-site.xml, the context
automatically creates metastore_db and warehouse in the current directory."

So, as I have an existing Hive set up and configured, how would I be able to
use the same in Spark?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Link-existing-Hive-to-Spark-tp21531.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org




Re: Link existing Hive to Spark

2015-02-06 Thread Todd Nist
Hi Ashu,

Per the documentation:

Configuration of Hive is done by placing your hive-site.xml file in conf/.


For example, you can place something like this in your
$SPARK_HOME/conf/hive-site.xml file:

<configuration>
<property>
  <name>hive.metastore.uris</name>
  <!-- Ensure that the following statement points to the Hive Metastore URI in your cluster -->
  <value>thrift://HostNameHere:9083</value>
  <description>URI for client to contact metastore server</description>
</property>
</configuration>

HTH.

-Todd



On Fri, Feb 6, 2015 at 4:12 AM, ashu ashutosh.triv...@iiitb.org wrote:

 Hi,
 I have Hive in development and I want to use it in Spark. The Spark SQL
 documentation says the following:

 "Users who do not have an existing Hive deployment can still create a
 HiveContext. When not configured by the hive-site.xml, the context
 automatically creates metastore_db and warehouse in the current directory."

 So, as I have an existing Hive set up and configured, how would I be able to
 use the same in Spark?



 --
 View this message in context:
 http://apache-spark-user-list.1001560.n3.nabble.com/Link-existing-Hive-to-Spark-tp21531.html
 Sent from the Apache Spark User List mailing list archive at Nabble.com.

 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org




Re: Link existing Hive to Spark

2015-02-06 Thread Ashutosh Trivedi (MT2013030)
OK. Is there no way to specify it in code, when I create the SparkConf?


From: Todd Nist tsind...@gmail.com
Sent: Friday, February 6, 2015 10:08 PM
To: Ashutosh Trivedi (MT2013030)
Cc: user@spark.apache.org
Subject: Re: Link existing Hive to Spark

You can always just add /etc/hadoop/conf to the appropriate classpath entry
in $SPARK_HOME/conf/spark-defaults.conf.
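
For example, something along these lines (the property names are standard Spark settings; that /etc/hadoop/conf is where your hive-site.xml actually lives is an assumption about your cluster):

# $SPARK_HOME/conf/spark-defaults.conf (sketch)
spark.driver.extraClassPath     /etc/hadoop/conf
spark.executor.extraClassPath   /etc/hadoop/conf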

On Fri, Feb 6, 2015 at 11:16 AM, Ashutosh Trivedi (MT2013030) ashutosh.triv...@iiitb.org wrote:

Hi Todd,

Thanks for the input.

I use IntelliJ as my IDE and create an SBT project, and I list all the
dependencies (for example hive, spark-sql, etc.) in build.sbt. These
dependencies stay in the local ivy2 repository after being downloaded from
Maven Central. Should I go into ivy2 and put hive-site.xml there?

If I build Spark from source code, I can put the file in conf/, but I am
avoiding that.


From: Todd Nist tsind...@gmail.com
Sent: Friday, February 6, 2015 8:32 PM
To: Ashutosh Trivedi (MT2013030)
Cc: user@spark.apache.org
Subject: Re: Link existing Hive to Spark

Hi Ashu,

Per the documentation:

Configuration of Hive is done by placing your hive-site.xml file in conf/.

For example, you can place something like this in your
$SPARK_HOME/conf/hive-site.xml file:


<configuration>
<property>
  <name>hive.metastore.uris</name>
  <!-- Ensure that the following statement points to the Hive Metastore URI in your cluster -->
  <value>thrift://HostNameHere:9083</value>
  <description>URI for client to contact metastore server</description>
</property>
</configuration>

HTH.

-Todd



On Fri, Feb 6, 2015 at 4:12 AM, ashu ashutosh.triv...@iiitb.org wrote:
Hi,
I have Hive in development and I want to use it in Spark. The Spark SQL
documentation says the following:

"Users who do not have an existing Hive deployment can still create a
HiveContext. When not configured by the hive-site.xml, the context
automatically creates metastore_db and warehouse in the current directory."

So, as I have an existing Hive set up and configured, how would I be able to
use the same in Spark?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Link-existing-Hive-to-Spark-tp21531.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org