[jira] [Commented] (SPARK-11474) Options to jdbc load are lower cased

2015-11-04 Thread Apache Spark (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-11474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990335#comment-14990335 ]

Apache Spark commented on SPARK-11474:
--

User 'huaxingao' has created a pull request for this issue:
https://github.com/apache/spark/pull/9473

> Options to jdbc load are lower cased
> 
>
> Key: SPARK-11474
> URL: https://issues.apache.org/jira/browse/SPARK-11474
> Project: Spark
>  Issue Type: Bug
>  Components: Input/Output
> Affects Versions: 1.5.1
> Environment: Linux & Mac
> Reporter: Stephen Samuel
> Priority: Minor
>
> We recently upgraded from Spark 1.3.0 to 1.5.1, and one of the features we 
> wanted to take advantage of was the fetchSize option added to the JDBC 
> DataFrame reader.
> In 1.5.1 there appears to be a bug or regression whereby an options map has 
> its keys lowercased. The properties that existed prior to 1.4, such as 
> dbtable, url and driver, are fine, but the newer fetchSize gets converted 
> to fetchsize.
> To reproduce:
> val conf = new SparkConf(true).setMaster("local").setAppName("fetchtest")
> val sc = new SparkContext(conf)
> val sql = new SQLContext(sc)
> val options = Map("url" -> ..., "driver" -> ..., "fetchSize" -> ...)
> val df = sql.load("jdbc", options)
> Set a breakpoint at line 371 in JDBCRDD and you'll see the options are all 
> lowercased, so
> val fetchSize = properties.getProperty("fetchSize", "0").toInt
> results in 0.
> Now I know sql.load is deprecated, but this might be occurring with other 
> methods too. The workaround is to use the java.util.Properties overload, 
> which keeps the case-sensitive keys.
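
For reference, a minimal sketch of the java.util.Properties workaround mentioned above; the JDBC URL, driver class, table name and fetch size below are placeholders, not values from the report:

  import java.util.Properties

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.SQLContext

  object FetchSizeWorkaround {
    def main(args: Array[String]): Unit = {
      val conf = new SparkConf(true).setMaster("local").setAppName("fetchtest")
      val sc = new SparkContext(conf)
      val sqlContext = new SQLContext(sc)

      // java.util.Properties keeps keys exactly as given, so "fetchSize"
      // reaches JDBCRDD with its original casing.
      val props = new Properties()
      props.setProperty("driver", "org.postgresql.Driver") // placeholder driver class
      props.setProperty("fetchSize", "100")                // placeholder fetch size

      // Properties-based overload of the JDBC reader; URL and table are placeholders.
      val df = sqlContext.read.jdbc("jdbc:postgresql://host/db", "my_table", props)
      df.show()
    }
  }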






[jira] [Commented] (SPARK-11474) Options to jdbc load are lower cased

2015-11-04 Thread Apache Spark (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-11474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990193#comment-14990193 ]

Apache Spark commented on SPARK-11474:
--

User 'huaxingao' has created a pull request for this issue:
https://github.com/apache/spark/pull/9461







[jira] [Commented] (SPARK-11474) Options to jdbc load are lower cased

2015-11-04 Thread Huaxin Gao (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-11474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989094#comment-14989094 ]

Huaxin Gao commented on SPARK-11474:


Opened PR https://github.com/apache/spark/pull/9461
Somehow this issue didn't get automatically linked to the PR. Will fix the 
problem tomorrow.








[jira] [Commented] (SPARK-11474) Options to jdbc load are lower cased

2015-11-03 Thread Huaxin Gao (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-11474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988939#comment-14988939 ]

Huaxin Gao commented on SPARK-11474:


In DefaultDataSource.scala, createRelation is declared as:

  override def createRelation(
      sqlContext: SQLContext,
      parameters: Map[String, String]): BaseRelation

The parameters argument is a CaseInsensitiveMap. After this line

  parameters.foreach(kv => properties.setProperty(kv._1, kv._2))

properties contains only lower-cased keys, so fetchSize becomes fetchsize.
However, the compute method in JDBCRDD has

  val fetchSize = properties.getProperty("fetchSize", "0").toInt

so the fetchSize value is always 0 and never gets set correctly.

I will open a pull request to fix this problem.
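
To make the failure mode concrete, here is a small standalone sketch. The map below is not Spark's actual CaseInsensitiveMap; it only mimics the lower-casing of keys, and the option values are made up:

  import java.util.Properties

  object LowerCasedKeyDemo {
    def main(args: Array[String]): Unit = {
      // Stand-in for the case-insensitive options map: keys end up lower-cased.
      val parameters: Map[String, String] =
        Map("url" -> "jdbc:h2:mem:test", "fetchSize" -> "100")
          .map { case (k, v) => k.toLowerCase -> v }

      // Same copy loop as in createRelation: the lower-cased keys are carried
      // over into the Properties object unchanged.
      val properties = new Properties()
      parameters.foreach(kv => properties.setProperty(kv._1, kv._2))

      // JDBCRDD looks up the mixed-case key and falls back to the default.
      val fetchSize = properties.getProperty("fetchSize", "0").toInt
      println(s"fetchSize = $fetchSize")  // prints 0, not 100
    }
  }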



