[ https://issues.apache.org/jira/browse/SPARK-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tien-Dung LE updated SPARK-9280:
--------------------------------
    Description: 
In a spark-shell session, stopping a Spark context and then creating a new 
Spark context and Hive context does not clean up the Spark SQL configuration. 
More precisely, the new Hive context still carries over the previous 
context's settings. It would be great if someone could let us know how to 
avoid this situation.

{code:title=A new HiveContext should not load configuration from history}
// Run in spark-shell (Spark 1.3.1): write some data and set a SQL conf value.
case class Foo(x: Int = (math.random * 1e3).toInt)
val foo = (1 to 100).map(i => Foo()).toDF
foo.saveAsParquetFile("foo")
sqlContext.setConf("spark.sql.shuffle.partitions", "10")

// Stop the current context and build a completely fresh one.
sc.stop

val sparkConf2 = new org.apache.spark.SparkConf()
val sc2 = new org.apache.spark.SparkContext(sparkConf2)
val sqlContext2 = new org.apache.spark.sql.hive.HiveContext(sc2)

// getConf(key, default) should fall back to the given default on a fresh
// context, but it returns the value set on the old context instead.
sqlContext2.getConf("spark.sql.shuffle.partitions", "20") // expected "20" but got "10"
val foo2 = sqlContext2.parquetFile("foo")
sqlContext2.getConf("spark.sql.shuffle.partitions", "30") // expected "30" but got "10"
{code}
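
Until the cause is fixed, one possible workaround (a sketch only, not a confirmed fix; it assumes the stale value is only observable through the SQL conf) is to explicitly re-set the affected keys on the new context instead of relying on their defaults:

{code:title=Possible workaround (sketch)}
// Hypothetical workaround: after building the fresh HiveContext, force the
// SQL settings you depend on back to known values. "200" is Spark's
// documented default for spark.sql.shuffle.partitions.
sqlContext2.setConf("spark.sql.shuffle.partitions", "200")
sqlContext2.getConf("spark.sql.shuffle.partitions", "20") // now returns "200", as set
{code}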



> New HiveContext object unexpectedly loads configuration settings from history 
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-9280
>                 URL: https://issues.apache.org/jira/browse/SPARK-9280
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.3.1
>            Reporter: Tien-Dung LE
>


