[ https://issues.apache.org/jira/browse/SPARK-23186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wenchen Fan updated SPARK-23186:
--------------------------------
    Fix Version/s: 2.2.2

> Initialize DriverManager first before loading Drivers
> -----------------------------------------------------
>
>                 Key: SPARK-23186
>                 URL: https://issues.apache.org/jira/browse/SPARK-23186
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.1
>            Reporter: Dongjoon Hyun
>            Assignee: Dongjoon Hyun
>            Priority: Major
>             Fix For: 2.2.2, 2.3.0
>
>
> Since some JDBC drivers have class-initialization code that calls 
> `DriverManager`, we need to initialize `DriverManager` first in order to 
> avoid a potential deadlock like the following (see also STORM-2527).
> {code}
> Thread 9587: (state = BLOCKED)
>  - sun.reflect.NativeConstructorAccessorImpl.newInstance0(java.lang.reflect.Constructor, java.lang.Object[]) @bci=0 (Compiled frame; information may be imprecise)
>  - sun.reflect.NativeConstructorAccessorImpl.newInstance(java.lang.Object[]) @bci=85, line=62 (Compiled frame)
>  - sun.reflect.DelegatingConstructorAccessorImpl.newInstance(java.lang.Object[]) @bci=5, line=45 (Compiled frame)
>  - java.lang.reflect.Constructor.newInstance(java.lang.Object[]) @bci=79, line=423 (Compiled frame)
>  - java.lang.Class.newInstance() @bci=138, line=442 (Compiled frame)
>  - java.util.ServiceLoader$LazyIterator.nextService() @bci=119, line=380 (Interpreted frame)
>  - java.util.ServiceLoader$LazyIterator.next() @bci=11, line=404 (Interpreted frame)
>  - java.util.ServiceLoader$1.next() @bci=37, line=480 (Interpreted frame)
>  - java.sql.DriverManager$2.run() @bci=21, line=603 (Interpreted frame)
>  - java.sql.DriverManager$2.run() @bci=1, line=583 (Interpreted frame)
>  - java.security.AccessController.doPrivileged(java.security.PrivilegedAction) @bci=0 (Compiled frame)
>  - java.sql.DriverManager.loadInitialDrivers() @bci=27, line=583 (Interpreted frame)
>  - java.sql.DriverManager.<clinit>() @bci=32, line=101 (Interpreted frame)
>  - org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(java.lang.String, java.lang.Integer, java.lang.String, java.util.Properties) @bci=12, line=98 (Interpreted frame)
>  - org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(org.apache.hadoop.conf.Configuration, java.util.Properties) @bci=22, line=57 (Interpreted frame)
>  - org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.conf.Configuration) @bci=61, line=116 (Interpreted frame)
>  - org.apache.phoenix.mapreduce.PhoenixInputFormat.createRecordReader(org.apache.hadoop.mapreduce.InputSplit, org.apache.hadoop.mapreduce.TaskAttemptContext) @bci=10, line=71 (Interpreted frame)
>  - org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(org.apache.spark.rdd.NewHadoopRDD, org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=233, line=156 (Interpreted frame)
> Thread 9170: (state = BLOCKED)
>  - org.apache.phoenix.jdbc.PhoenixDriver.<clinit>() @bci=35, line=125 (Interpreted frame)
>  - sun.reflect.NativeConstructorAccessorImpl.newInstance0(java.lang.reflect.Constructor, java.lang.Object[]) @bci=0 (Compiled frame)
>  - sun.reflect.NativeConstructorAccessorImpl.newInstance(java.lang.Object[]) @bci=85, line=62 (Compiled frame)
>  - sun.reflect.DelegatingConstructorAccessorImpl.newInstance(java.lang.Object[]) @bci=5, line=45 (Compiled frame)
>  - java.lang.reflect.Constructor.newInstance(java.lang.Object[]) @bci=79, line=423 (Compiled frame)
>  - java.lang.Class.newInstance() @bci=138, line=442 (Compiled frame)
>  - org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(java.lang.String) @bci=89, line=46 (Interpreted frame)
>  - org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$2.apply() @bci=7, line=53 (Interpreted frame)
>  - org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$2.apply() @bci=1, line=52 (Interpreted frame)
>  - org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.<init>(org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD, org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=81, line=347 (Interpreted frame)
>  - org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=7, line=339 (Interpreted frame)
> {code}
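The pattern behind the fix can be sketched in plain Java: touch `java.sql.DriverManager` before reflectively instantiating any driver class, so that `DriverManager.<clinit>` (which runs `loadInitialDrivers()` via `ServiceLoader`, as in the first thread above) completes before a driver's own static initializer can start. The class and method names below are illustrative, not Spark's actual code:

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.util.Collections;
import java.util.Enumeration;

public class DriverInit {
    // Hypothetical sketch: force DriverManager class initialization
    // (and its loadInitialDrivers() pass over all service-loaded
    // drivers) BEFORE running a driver's <clinit> via reflection.
    // This orders the two class initializations and removes the
    // circular wait shown in the stack traces above.
    public static void register(String className) throws Exception {
        // Any reference to DriverManager triggers its <clinit> first.
        DriverManager.getDrivers();
        // Now it is safe for the driver's static initializer to call
        // back into the already-initialized DriverManager.
        Class.forName(className).getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        // No real JDBC driver is needed to see the ordering: this just
        // shows DriverManager initializing cleanly on its own.
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        System.out.println("registered drivers: "
                + Collections.list(drivers).size());
    }
}
```

The key point is that a JVM class initializer holds an initialization lock for its class; if thread A is inside `DriverManager.<clinit>` waiting on `PhoenixDriver.<clinit>` while thread B is inside `PhoenixDriver.<clinit>` waiting on `DriverManager.<clinit>`, both block forever. Forcing `DriverManager` to finish initializing first makes the ordering deterministic.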



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
