You can put hive-site.xml in the $SPARK_HOME/conf directory.

The following property controls where the warehouse data is located:

<property>
  <name>spark.sql.warehouse.dir</name>
  <value>/home/myuser/spark-2.2.0/spark-warehouse</value>
  <description>location of the warehouse directory</description>
</property>
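
If you prefer not to ship a hive-site.xml, the same setting can also be
passed programmatically when building the SparkSession. A minimal sketch in
Scala (the path, app name, and master below are placeholders, not taken from
this thread):

import org.apache.spark.sql.SparkSession

// Point the warehouse at an explicit directory instead of the default
// location under the current working directory. Substitute your own path.
val spark = SparkSession.builder()
  .appName("warehouse-location-example")   // placeholder app name
  .master("local[*]")                      // placeholder master
  .config("spark.sql.warehouse.dir", "/home/myuser/spark-2.2.0/spark-warehouse")
  .enableHiveSupport()
  .getOrCreate()

// Managed tables created through Hive support are now stored under the
// configured warehouse directory.
spark.sql("SHOW TABLES").show()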

~Dylan



On Tue, Aug 29, 2017 at 1:53 PM, Andrés Ivaldi <iaiva...@gmail.com> wrote:

> Every comment is welcome.
>
> If I'm not wrong, it's because we are using the percentile aggregation, which
> comes with Hive support; apart from that, nothing else.
>
>
> On Tue, Aug 29, 2017 at 11:23 AM, Jean Georges Perrin <j...@jgp.net> wrote:
>
>> Sorry if my comment is not helping, but... why do you need Hive? Can't
>> you save your aggregation using parquet for example?
>>
>> jg
>>
>>
>> > On Aug 29, 2017, at 08:34, Andrés Ivaldi <iaiva...@gmail.com> wrote:
>> >
>> > Hello, I'm using the Spark API with Hive support. I don't have a Hive
>> > instance; I'm just using Hive for some aggregation functions.
>> >
>> > The problem is that Hive creates the hive and metastore_db folders in the
>> > temp folder, and I want to change that location.
>> >
>> > Regards.
>> >
>> > --
>> > Ing. Ivaldi Andres
>>
>>
>
>
> --
> Ing. Ivaldi Andres
>



-- 
Dylan Wan
Solution Architect - Enterprise Apps
Email: dylan....@gmail.com
My Blog: dylanwan.wordpress.com
