[ https://issues.apache.org/jira/browse/SPARK-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14519124#comment-14519124 ]
Vyacheslav Baranov commented on SPARK-6913:
-------------------------------------------

The problem is in java.sql.DriverManager, which does not see drivers loaded by ClassLoaders other than the bootstrap ClassLoader. The solution would be to create a proxy driver, included in the Spark assembly, that forwards all requests to the wrapped driver. I have a working fix for this issue and am going to make a pull request soon.

> "No suitable driver found" loading JDBC dataframe using driver added through SparkContext.addJar
> ---------------------------------------------------------------------------------------------------
>
>          Key: SPARK-6913
>          URL: https://issues.apache.org/jira/browse/SPARK-6913
>      Project: Spark
>   Issue Type: Bug
>   Components: SQL
>     Reporter: Evan Yu
>
> val sc = new SparkContext(conf)
> sc.addJar("J:\mysql-connector-java-5.1.35.jar")
> val df = sqlContext.jdbc("jdbc:mysql://localhost:3000/test_db?user=abc&password=123", "table1")
> df.show()
>
> Following error:
>
> 2015-04-14 17:04:39,541 [task-result-getter-0] WARN org.apache.spark.scheduler.TaskSetManager - Lost task 0.0 in stage 0.0 (TID 0, dev1.test.dc2.com): java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3000/test_db?user=abc&password=123
>     at java.sql.DriverManager.getConnection(DriverManager.java:689)
>     at java.sql.DriverManager.getConnection(DriverManager.java:270)
>     at org.apache.spark.sql.jdbc.JDBCRDD$$anonfun$getConnector$1.apply(JDBCRDD.scala:158)
>     at org.apache.spark.sql.jdbc.JDBCRDD$$anonfun$getConnector$1.apply(JDBCRDD.scala:150)
>     at org.apache.spark.sql.jdbc.JDBCRDD$$anon$1.<init>(JDBCRDD.scala:317)
>     at org.apache.spark.sql.jdbc.JDBCRDD.compute(JDBCRDD.scala:309)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>     at org.apache.spark.scheduler.Task.run(Task.scala:64)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
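The proxy-driver approach described in the comment can be sketched as follows. This is a minimal, hypothetical illustration (class names like DriverProxy and FakeDriver are invented here, not Spark's actual patch): a driver living on the bootstrap-visible classpath implements java.sql.Driver and forwards every call to a wrapped driver, which may have been loaded by a ClassLoader that DriverManager cannot see (e.g. one populated via SparkContext.addJar). Registering the proxy then makes the wrapped driver reachable through DriverManager.

```java
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.util.Properties;
import java.util.logging.Logger;

// Hypothetical sketch: a proxy loaded by the bootstrap-visible classpath
// that forwards every java.sql.Driver call to a wrapped driver.
public class DriverProxy implements Driver {
    private final Driver wrapped;

    public DriverProxy(Driver wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public Connection connect(String url, Properties info) throws SQLException {
        return wrapped.connect(url, info);
    }

    @Override
    public boolean acceptsURL(String url) throws SQLException {
        return wrapped.acceptsURL(url);
    }

    @Override
    public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) throws SQLException {
        return wrapped.getPropertyInfo(url, info);
    }

    @Override
    public int getMajorVersion() { return wrapped.getMajorVersion(); }

    @Override
    public int getMinorVersion() { return wrapped.getMinorVersion(); }

    @Override
    public boolean jdbcCompliant() { return wrapped.jdbcCompliant(); }

    @Override
    public Logger getParentLogger() throws java.sql.SQLFeatureNotSupportedException {
        return wrapped.getParentLogger();
    }

    // Stand-in for a real driver (e.g. the MySQL driver) that would have
    // been loaded by a non-bootstrap ClassLoader in the reported scenario.
    static class FakeDriver implements Driver {
        public Connection connect(String url, Properties info) { return null; }
        public boolean acceptsURL(String url) { return url.startsWith("jdbc:fake:"); }
        public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
            return new DriverPropertyInfo[0];
        }
        public int getMajorVersion() { return 5; }
        public int getMinorVersion() { return 1; }
        public boolean jdbcCompliant() { return false; }
        public Logger getParentLogger() { return Logger.getGlobal(); }
    }

    public static void main(String[] args) throws SQLException {
        DriverProxy proxy = new DriverProxy(new FakeDriver());
        // Registering the proxy makes the wrapped driver reachable through
        // DriverManager, which only consults drivers it can see itself.
        DriverManager.registerDriver(proxy);
        System.out.println(proxy.acceptsURL("jdbc:fake:db")); // prints "true"
    }
}
```

The key point is that DriverManager's lookup only considers drivers visible to it; wrapping the real driver in a class that is visible sidesteps that check while delegating all actual work.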