[jira] [Assigned] (SPARK-13403) HiveConf used for SparkSQL is not based on the Hadoop configuration

2016-02-19 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-13403:


Assignee: Apache Spark

> HiveConf used for SparkSQL is not based on the Hadoop configuration
> -------------------------------------------------------------------
>
> Key: SPARK-13403
> URL: https://issues.apache.org/jira/browse/SPARK-13403
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.6.0
>Reporter: Ryan Blue
>Assignee: Apache Spark
>
> The HiveConf instances used by HiveContext are not instantiated by passing in 
> the SparkContext's Hadoop conf and are instead based only on the config files 
> in the environment. Hadoop best practice is to instantiate just one 
> Configuration from the environment and then pass that conf when instantiating 
> others so that modifications aren't lost.
> Spark copies configuration variables that start with "spark.hadoop." from 
> spark-defaults.conf into {{sc.hadoopConfiguration}} when it is created, but 
> because HiveConf is built from the environment alone, those settings never 
> reach the HiveConf.
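The propagation pattern at issue can be sketched language-agnostically. The following is a minimal, hypothetical Python analogue (the `EnvConfig` class and property names are illustrative, not Spark or Hadoop APIs) of why a conf built only from environment files loses overrides, while one copied from a base conf keeps them:

```python
# Minimal analogue of Hadoop's Configuration-propagation best practice.
# EnvConfig and ENV_FILES are stand-ins, not real Spark/Hadoop classes.

ENV_FILES = {"fs.defaultFS": "hdfs://namenode:8020"}  # stand-in for *-site.xml

class EnvConfig:
    """Loads defaults from the 'environment'; optionally copies a base conf,
    mirroring Hadoop's Configuration(Configuration other) pattern."""
    def __init__(self, base=None):
        self.props = dict(ENV_FILES)       # what a fresh conf reads from files
        if base is not None:
            self.props.update(base.props)  # carry over the base's modifications

    def set(self, key, value):
        self.props[key] = value

# Spark applies spark.hadoop.* overrides to sc.hadoopConfiguration:
hadoop_conf = EnvConfig()
hadoop_conf.set("fs.s3a.access.key", "XYZ")  # e.g. spark.hadoop.fs.s3a.access.key

# The reported bug: a HiveConf built from the environment only loses the override.
hive_conf_bad = EnvConfig()
# The best practice: pass the existing conf so modifications survive.
hive_conf_good = EnvConfig(base=hadoop_conf)

print("fs.s3a.access.key" in hive_conf_bad.props)   # False: override lost
print("fs.s3a.access.key" in hive_conf_good.props)  # True: override preserved
```

The fix the report implies is the second path: instantiate the HiveConf from the SparkContext's existing Hadoop conf rather than re-reading config files.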



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-13403) HiveConf used for SparkSQL is not based on the Hadoop configuration

2016-02-19 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-13403:


Assignee: (was: Apache Spark)
