GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/16678
[SPARK-19209] [WIP] JDBC: Fix "No suitable driver" on the first try
### What changes were proposed in this pull request?
This PR reverts some of the changes made in https://github.com/apache/spark/pull/15292.
> @darabos reported that Spark 2.1.0 issues the `No suitable driver` exception
> the first time it reads a JDBC data source, but simply re-executing the same
> command a second time "fixes" the error. This happens only when Hive support
> is enabled.
Based on my understanding, the problem is that the `java.sql.DriverManager` class cannot access drivers loaded by Spark's ClassLoader. The changes made in that PR do not sound like a solution for the reported issue; it could instead be caused by other code changes in 2.1 that alter the current ClassLoader.
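For context, a common workaround for this class of problem is to sidestep `DriverManager` entirely: load the `Driver` class through the context ClassLoader and call `connect` on the instance directly, since `DriverManager` only hands back drivers whose class is visible to the *caller's* ClassLoader. A minimal sketch of that pattern (my own illustration, not the diff in this PR; `DirectDriverConnect` is a hypothetical name):
```scala
import java.sql.{Connection, Driver}
import java.util.Properties

object DirectDriverConnect {
  // Load the driver class through the thread's context ClassLoader (which in
  // Spark can see jars added via --jars), instantiate it, and call connect()
  // on the instance directly. This bypasses DriverManager's check that the
  // driver class is visible to the caller's ClassLoader.
  def connect(driverClass: String, url: String): Connection = {
    val loader = Option(Thread.currentThread.getContextClassLoader)
      .getOrElse(getClass.getClassLoader)
    val driver = Class.forName(driverClass, true, loader)
      .newInstance().asInstanceOf[Driver]
    // Note: Driver.connect returns null (rather than throwing) when the
    // driver does not accept the URL, so callers should check for that.
    driver.connect(url, new Properties())
  }
}
```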
@darabos Could you please help us try it in your local environment? Thanks!
```
$ ~/spark-2.1.0/bin/spark-shell --jars org.xerial.sqlite-jdbc-3.8.11.2.jar --driver-class-path org.xerial.sqlite-jdbc-3.8.11.2.jar
[...]

scala> spark.read.format("jdbc").option("url", "jdbc:sqlite:").option("dbtable", "x").load
java.sql.SQLException: No suitable driver
  at java.sql.DriverManager.getDriver(DriverManager.java:315)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
  ... 48 elided

scala> spark.read.format("jdbc").option("url", "jdbc:sqlite:").option("dbtable", "x").load
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (no such table: x)
```
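As for why the second attempt succeeds: one plausible reading (my illustration, not an analysis from this PR) is that JDBC drivers register themselves with `DriverManager` from a static initializer, so even a failed first attempt can leave the driver registered for the next call. A minimal sketch of that lazy registration, assuming sqlite-jdbc is on the classpath:
```scala
import java.sql.DriverManager
import scala.collection.JavaConverters._

object LazyRegistrationDemo {
  def main(args: Array[String]): Unit = {
    // Before the driver class is initialized, DriverManager has not seen it.
    println("before: " +
      DriverManager.getDrivers.asScala.map(_.getClass.getName).mkString(", "))
    // Loading the class runs its static initializer, which calls
    // DriverManager.registerDriver(...).
    Class.forName("org.sqlite.JDBC")
    println("after: " +
      DriverManager.getDrivers.asScala.map(_.getClass.getName).mkString(", "))
  }
}
```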
### How was this patch tested?
@darabos Could you run a manual test and see whether these changes resolve your issue?
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/gatorsmile/spark jdbcDriver
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/16678.patch
To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:
This closes #16678
----
commit 9f5b11ba8c30a6908186ba392c090e4e9439d21d
Author: gatorsmile <[email protected]>
Date: 2017-01-23T09:01:06Z
try1
----