[jira] [Commented] (SPARK-26077) Reserved SQL words are not escaped by JDBC writer for table name

2018-11-20 Thread Eugene Golovan (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16693046#comment-16693046
 ] 

Eugene Golovan commented on SPARK-26077:


[~maropu] Sure, please have a look. When running the unit tests I noticed one 
thing: the dbtable option may be a subquery as well. I added a workaround for 
this too, but I do not like it very much. In any case, if you have suggestions, 
you are welcome!
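
To make that concern concrete, here is a minimal sketch of the shape of the
workaround I mean (the helper name and the identifier check are illustrative,
not the actual PR code): quote the dbtable value only when it looks like a
bare table name, and pass anything else, such as a subquery, through untouched.

// Hypothetical sketch, not the code in the pull request.
// A dbtable value can be either a table name or a subquery such as
// "(SELECT * FROM t) sub", so blindly quoting the whole string would break it.
object TableNameQuoting {
  private val PlainIdentifier = "^[A-Za-z_][A-Za-z0-9_]*$".r

  def quoteIfPlainIdentifier(dbtable: String, quote: String => String): String =
    dbtable.trim match {
      case id @ PlainIdentifier() => quote(id)  // e.g. condition -> `condition`
      case subqueryOrQualified    => subqueryOrQualified  // leave as-is
    }
}

// Usage: TableNameQuoting.quoteIfPlainIdentifier("condition", n => s"`$n`")
// returns `condition`, while "(SELECT 1) t" is returned unchanged.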

> Reserved SQL words are not escaped by JDBC writer for table name
> 
>
> Key: SPARK-26077
> URL: https://issues.apache.org/jira/browse/SPARK-26077
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: Eugene Golovan
>Priority: Major
>
> This bug is similar to SPARK-16387, but this time the table name is not escaped.
> How to reproduce:
> 1/ Start the Spark shell with the MySQL connector:
> spark-shell --jars ./mysql-connector-java-8.0.13.jar
>  
> 2/ Execute the following code:
>  
> import spark.implicits._
> (spark
> .createDataset(Seq("a","b","c"))
> .toDF("order")
> .write
> .format("jdbc")
> .option("url", s"jdbc:mysql://root@localhost:3306/test")
> .option("driver", "com.mysql.cj.jdbc.Driver")
> .option("dbtable", "condition")
> .save)
>  
> where condition is a reserved word.
>  
> Error message:
>  
> java.sql.SQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'condition (`order` TEXT )' at line 1
>  at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
>  at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
>  at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
>  at com.mysql.cj.jdbc.StatementImpl.executeUpdateInternal(StatementImpl.java:1355)
>  at com.mysql.cj.jdbc.StatementImpl.executeLargeUpdate(StatementImpl.java:2128)
>  at com.mysql.cj.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1264)
>  at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createTable(JdbcUtils.scala:844)
>  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:95)
>  at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
>  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
>  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
>  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
>  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
>  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
>  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
>  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
>  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:656)
>  at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
>  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
>  ... 59 elided
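
Until the writer quotes the name itself, a possible interim workaround (my own
suggestion, relying on MySQL's backtick quoting; other databases use different
quote characters) is to backtick-quote the reserved word by hand in the
dbtable option:

import spark.implicits._
(spark
.createDataset(Seq("a","b","c"))
.toDF("order")
.write
.format("jdbc")
.option("url", "jdbc:mysql://root@localhost:3306/test")
.option("driver", "com.mysql.cj.jdbc.Driver")
.option("dbtable", "`condition`")  // backtick-quoted by hand
.save)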






[jira] [Commented] (SPARK-26077) Reserved SQL words are not escaped by JDBC writer for table name

2018-11-20 Thread Apache Spark (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16693037#comment-16693037
 ] 

Apache Spark commented on SPARK-26077:
--

User 'golovan' has created a pull request for this issue:
https://github.com/apache/spark/pull/23094




[jira] [Commented] (SPARK-26077) Reserved SQL words are not escaped by JDBC writer for table name

2018-11-19 Thread Takeshi Yamamuro (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692546#comment-16692546
 ] 

Takeshi Yamamuro commented on SPARK-26077:
--

Thanks for the report! It seems master doesn't quote JDBC table names at the 
moment. Are you interested in contributing a fix? I think this is a kind of 
starter issue, maybe...
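
For reference, the per-database quoting hook already exists on the dialects;
a minimal sketch of the direction (where exactly to apply it inside
JdbcUtils.createTable is the open part):

import org.apache.spark.sql.jdbc.JdbcDialects

// Each JdbcDialect supplies quoteIdentifier for its database; the MySQL
// dialect wraps identifiers in backticks. The fix would route the table
// name through this hook when building the CREATE TABLE statement,
// just as column names already are.
val dialect = JdbcDialects.get("jdbc:mysql://localhost:3306/test")
val quotedTable = dialect.quoteIdentifier("condition")  // yields `condition`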




[jira] [Commented] (SPARK-26077) Reserved SQL words are not escaped by JDBC writer for table name

2018-11-15 Thread Hyukjin Kwon (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16688967#comment-16688967
 ] 

Hyukjin Kwon commented on SPARK-26077:
--

cc'ing [~maropu] FYI
