Re: Link existing Hive to Spark

2015-02-06 Thread Ashutosh Trivedi (MT2013030)
OK. Is there no way to specify it in code, when I create the SparkConf?
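
For reference, one possibility (not confirmed in this thread) is to set the
metastore URI programmatically on the HiveContext rather than via the SparkConf.
A minimal sketch, assuming Spark 1.2.x, the spark-hive module on the classpath,
and the thrift URI from the hive-site.xml example further down:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc   = new SparkContext(new SparkConf().setAppName("hive-link"))
val hive = new HiveContext(sc)

// Assumption: point the context at the existing metastore in code instead of
// (or in addition to) hive-site.xml. Other settings, such as the warehouse
// directory, may still need hive-site.xml or further setConf calls.
hive.setConf("hive.metastore.uris", "thrift://HostNameHere:9083")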


From: Todd Nist 
Sent: Friday, February 6, 2015 10:08 PM
To: Ashutosh Trivedi (MT2013030)
Cc: user@spark.apache.org
Subject: Re: Link existing Hive to Spark

You can always just add the entry /etc/hadoop/conf to the appropriate 
classpath entry in $SPARK_HOME/conf/spark-defaults.conf.
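
For reference, a hedged sketch of what that entry could look like; which
classpath properties are the "appropriate" ones depends on your deploy mode and
is an assumption here, not something stated in this thread:

# $SPARK_HOME/conf/spark-defaults.conf (sketch)
# Put the Hadoop/Hive client configuration directory on both classpaths
spark.driver.extraClassPath     /etc/hadoop/conf
spark.executor.extraClassPath   /etc/hadoop/conf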

On Fri, Feb 6, 2015 at 11:16 AM, Ashutosh Trivedi (MT2013030) 
<ashutosh.triv...@iiitb.org> wrote:

Hi Todd,

Thanks for the input.

I use IntelliJ as my IDE and create an SBT project, declaring all the 
dependencies (for example hive and spark-sql) in build.sbt. These dependencies 
stay in the local ivy2 repository after being downloaded from Maven Central. 
Should I go into ivy2 and put hive-site.xml there?
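
For reference, a minimal build.sbt along those lines might look like the sketch
below; the spark-hive artifact (which provides HiveContext) and the 1.2.1
version are assumptions, not taken from this thread:

// build.sbt (sketch)
scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.2.1",
  "org.apache.spark" %% "spark-sql"  % "1.2.1",
  "org.apache.spark" %% "spark-hive" % "1.2.1"   // needed for HiveContext
)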

If I build Spark from source, I can put the file in conf/, but I am 
avoiding that.


From: Todd Nist <tsind...@gmail.com>
Sent: Friday, February 6, 2015 8:32 PM
To: Ashutosh Trivedi (MT2013030)
Cc: user@spark.apache.org
Subject: Re: Link existing Hive to Spark

Hi Ashu,

Per the documents:

Configuration of Hive is done by placing your hive-site.xml file in conf/.

For example, you can place something like this in your 
$SPARK_HOME/conf/hive-site.xml file:




<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://HostNameHere:9083</value>
    <description>URI for client to contact metastore server</description>
  </property>
</configuration>
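
With that file on the classpath, a minimal sketch of exercising the existing
metastore from Spark might look like this (assuming Spark 1.2.x and the
spark-hive module; the table name is a placeholder):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc   = new SparkContext(new SparkConf().setAppName("existing-hive"))
val hive = new HiveContext(sc)  // picks up hive-site.xml from the classpath

// List tables registered in the existing Hive metastore
hive.sql("SHOW TABLES").collect().foreach(println)

// Query an existing table (placeholder name)
hive.sql("SELECT count(*) FROM some_existing_table").collect().foreach(println)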



HTH.

-Todd



On Fri, Feb 6, 2015 at 4:12 AM, ashu <ashutosh.triv...@iiitb.org> wrote:
Hi,
I have Hive set up in development and I want to use it from Spark. The Spark SQL
documentation says the following:

 Users who do not have an existing Hive deployment can still create a
HiveContext. When not configured by the hive-site.xml, the context
automatically creates metastore_db and warehouse in the current directory.

So, since I already have Hive set up and configured, how can I use that same
installation from Spark?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Link-existing-Hive-to-Spark-tp21531.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.






Re: Link existing Hive to Spark

2015-02-06 Thread Todd Nist
You can always just add the entry /etc/hadoop/conf to the appropriate
classpath entry in $SPARK_HOME/conf/spark-defaults.conf.

On Fri, Feb 6, 2015 at 11:16 AM, Ashutosh Trivedi (MT2013030) <
ashutosh.triv...@iiitb.org> wrote:

>  Hi Todd,
>
> Thanks for the input.
>
> I use IntelliJ as my IDE and create an SBT project, declaring all the
> dependencies (for example hive and spark-sql) in build.sbt. These
> dependencies stay in the local ivy2 repository after being downloaded from
> Maven Central. Should I go into ivy2 and put hive-site.xml there?
>
> If I build Spark from source, I can put the file in conf/, but I am
> avoiding that.
>  --
> *From:* Todd Nist 
> *Sent:* Friday, February 6, 2015 8:32 PM
> *To:* Ashutosh Trivedi (MT2013030)
> *Cc:* user@spark.apache.org
> *Subject:* Re: Link existing Hive to Spark
>
>   Hi Ashu,
>
>  Per the documents:
>
>  Configuration of Hive is done by placing your hive-site.xml file in conf/.
>
>  For example, you can place something like this in your
> $SPARK_HOME/conf/hive-site.xml file:
>
> <configuration>
>   <property>
>     <name>hive.metastore.uris</name>
>     <value>thrift://HostNameHere:9083</value>
>     <description>URI for client to contact metastore server</description>
>   </property>
> </configuration>
>
>  HTH.
>
>  -Todd
>
>
>
> On Fri, Feb 6, 2015 at 4:12 AM, ashu  wrote:
>
>> Hi,
>> I have Hive set up in development and I want to use it from Spark. The
>> Spark SQL documentation says the following:
>>
>>  Users who do not have an existing Hive deployment can still create a
>> HiveContext. When not configured by the hive-site.xml, the context
>> automatically creates metastore_db and warehouse in the current
>> directory.
>>
>> So, since I already have Hive set up and configured, how can I use that
>> same installation from Spark?
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/Link-existing-Hive-to-Spark-tp21531.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>


Re: Link existing Hive to Spark

2015-02-06 Thread Ashutosh Trivedi (MT2013030)
Hi Todd,

Thanks for the input.

I use IntelliJ as my IDE and create an SBT project, declaring all the 
dependencies (for example hive and spark-sql) in build.sbt. These dependencies 
stay in the local ivy2 repository after being downloaded from Maven Central. 
Should I go into ivy2 and put hive-site.xml there?

If I build Spark from source, I can put the file in conf/, but I am 
avoiding that.


From: Todd Nist 
Sent: Friday, February 6, 2015 8:32 PM
To: Ashutosh Trivedi (MT2013030)
Cc: user@spark.apache.org
Subject: Re: Link existing Hive to Spark

Hi Ashu,

Per the documents:

Configuration of Hive is done by placing your hive-site.xml file in conf/.

For example, you can place something like this in your 
$SPARK_HOME/conf/hive-site.xml file:




<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://HostNameHere:9083</value>
    <description>URI for client to contact metastore server</description>
  </property>
</configuration>



HTH.

-Todd



On Fri, Feb 6, 2015 at 4:12 AM, ashu <ashutosh.triv...@iiitb.org> wrote:
Hi,
I have Hive set up in development and I want to use it from Spark. The Spark SQL
documentation says the following:

 Users who do not have an existing Hive deployment can still create a
HiveContext. When not configured by the hive-site.xml, the context
automatically creates metastore_db and warehouse in the current directory.

So, since I already have Hive set up and configured, how can I use that same
installation from Spark?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Link-existing-Hive-to-Spark-tp21531.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.





Re: Link existing Hive to Spark

2015-02-06 Thread Todd Nist
Hi Ashu,

Per the documents:

Configuration of Hive is done by placing your hive-site.xml file in conf/.


For example, you can place something like this in your
$SPARK_HOME/conf/hive-site.xml file:



<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://HostNameHere:9083</value>
    <description>URI for client to contact metastore server</description>
  </property>
</configuration>



HTH.

-Todd



On Fri, Feb 6, 2015 at 4:12 AM, ashu  wrote:

> Hi,
> I have Hive set up in development and I want to use it from Spark. The
> Spark SQL documentation says the following:
>
>  Users who do not have an existing Hive deployment can still create a
> HiveContext. When not configured by the hive-site.xml, the context
> automatically creates metastore_db and warehouse in the current directory.
>
> So, since I already have Hive set up and configured, how can I use that same
> installation from Spark?
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Link-existing-Hive-to-Spark-tp21531.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
>
>


Link existing Hive to Spark

2015-02-06 Thread ashu
Hi,
I have Hive set up in development and I want to use it from Spark. The Spark SQL
documentation says the following:

 Users who do not have an existing Hive deployment can still create a
HiveContext. When not configured by the hive-site.xml, the context
automatically creates metastore_db and warehouse in the current directory.

So, since I already have Hive set up and configured, how can I use that same
installation from Spark?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Link-existing-Hive-to-Spark-tp21531.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org