GitHub user dongjoon-hyun opened a pull request:

    https://github.com/apache/spark/pull/20563

    [SPARK-23186][SQL][BRANCH-2.2] Initialize DriverManager first before loading JDBC Drivers

    ## What changes were proposed in this pull request?
    
    Because some JDBC drivers call `DriverManager` from their class initializers, we need to initialize `DriverManager` first in order to avoid a potential executor-side **deadlock**, such as the one below (see also [STORM-2527](https://issues.apache.org/jira/browse/STORM-2527)).
    
    ```
    Thread 9587: (state = BLOCKED)
     - sun.reflect.NativeConstructorAccessorImpl.newInstance0(java.lang.reflect.Constructor, java.lang.Object[]) bci=0 (Compiled frame; information may be imprecise)
     - sun.reflect.NativeConstructorAccessorImpl.newInstance(java.lang.Object[]) bci=85, line=62 (Compiled frame)
     - sun.reflect.DelegatingConstructorAccessorImpl.newInstance(java.lang.Object[]) bci=5, line=45 (Compiled frame)
     - java.lang.reflect.Constructor.newInstance(java.lang.Object[]) bci=79, line=423 (Compiled frame)
     - java.lang.Class.newInstance() bci=138, line=442 (Compiled frame)
     - java.util.ServiceLoader$LazyIterator.nextService() bci=119, line=380 (Interpreted frame)
     - java.util.ServiceLoader$LazyIterator.next() bci=11, line=404 (Interpreted frame)
     - java.util.ServiceLoader$1.next() bci=37, line=480 (Interpreted frame)
     - java.sql.DriverManager$2.run() bci=21, line=603 (Interpreted frame)
     - java.sql.DriverManager$2.run() bci=1, line=583 (Interpreted frame)
     - java.security.AccessController.doPrivileged(java.security.PrivilegedAction) bci=0 (Compiled frame)
     - java.sql.DriverManager.loadInitialDrivers() bci=27, line=583 (Interpreted frame)
     - java.sql.DriverManager.<clinit>() bci=32, line=101 (Interpreted frame)
     - org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(java.lang.String, java.lang.Integer, java.lang.String, java.util.Properties) bci=12, line=98 (Interpreted frame)
     - org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(org.apache.hadoop.conf.Configuration, java.util.Properties) bci=22, line=57 (Interpreted frame)
     - org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.conf.Configuration) bci=61, line=116 (Interpreted frame)
     - org.apache.phoenix.mapreduce.PhoenixInputFormat.createRecordReader(org.apache.hadoop.mapreduce.InputSplit, org.apache.hadoop.mapreduce.TaskAttemptContext) bci=10, line=71 (Interpreted frame)
     - org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(org.apache.spark.rdd.NewHadoopRDD, org.apache.spark.Partition, org.apache.spark.TaskContext) bci=233, line=156 (Interpreted frame)
    
    Thread 9170: (state = BLOCKED)
     - org.apache.phoenix.jdbc.PhoenixDriver.<clinit>() bci=35, line=125 (Interpreted frame)
     - sun.reflect.NativeConstructorAccessorImpl.newInstance0(java.lang.reflect.Constructor, java.lang.Object[]) bci=0 (Compiled frame)
     - sun.reflect.NativeConstructorAccessorImpl.newInstance(java.lang.Object[]) bci=85, line=62 (Compiled frame)
     - sun.reflect.DelegatingConstructorAccessorImpl.newInstance(java.lang.Object[]) bci=5, line=45 (Compiled frame)
     - java.lang.reflect.Constructor.newInstance(java.lang.Object[]) bci=79, line=423 (Compiled frame)
     - java.lang.Class.newInstance() bci=138, line=442 (Compiled frame)
     - org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(java.lang.String) bci=89, line=46 (Interpreted frame)
     - org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$2.apply() bci=7, line=53 (Interpreted frame)
     - org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$2.apply() bci=1, line=52 (Interpreted frame)
     - org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.<init>(org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD, org.apache.spark.Partition, org.apache.spark.TaskContext) bci=81, line=347 (Interpreted frame)
     - org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(org.apache.spark.Partition, org.apache.spark.TaskContext) bci=7, line=339 (Interpreted frame)
    ```
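    
    The cycle above is a classic class-initialization deadlock: Thread 9587 is running `DriverManager.<clinit>` (which discovers drivers via `ServiceLoader`, triggering `PhoenixDriver.<clinit>`), while Thread 9170 is running `PhoenixDriver.<clinit>` (which calls back into the not-yet-initialized `DriverManager`). Forcing `DriverManager`'s static initializer to complete before any driver class is loaded reflectively breaks the cycle. A minimal sketch of the idea (the class name and `register` helper are illustrative, not Spark's actual code):
    
    ```java
    import java.sql.DriverManager;
    
    // Illustrative sketch: make sure DriverManager's <clinit> (which runs
    // ServiceLoader-based driver discovery) finishes before any JDBC driver
    // class is initialized reflectively.
    public class DriverRegistrySketch {
        static {
            // Touching DriverManager here triggers its static initialization
            // (loadInitialDrivers) exactly once, up front.
            DriverManager.getDrivers();
        }
    
        // Reflectively load and instantiate a driver class, roughly what
        // Spark's DriverRegistry.register does. The driver's own static
        // initializer may call back into DriverManager; that is now safe
        // because DriverManager is already fully initialized.
        public static void register(String className) throws ReflectiveOperationException {
            Class.forName(className).getDeclaredConstructor().newInstance();
        }
    }
    ```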
    
    ## How was this patch tested?
    
    N/A

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/dongjoon-hyun/spark SPARK-23186-2

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/20563.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #20563
    
----
commit aeb188cb05187ae61cb4084beae9234e9c4fb8f8
Author: Dongjoon Hyun <dongjoon@...>
Date:   2018-02-09T04:54:57Z

    [SPARK-23186][SQL][BRANCH-2.2] Initialize DriverManager first before loading JDBC Drivers
    
    Author: Dongjoon Hyun <dongj...@apache.org>
    
    Closes #20359 from dongjoon-hyun/SPARK-23186.
    
    (cherry picked from commit 8cbcc33876c773722163b2259644037bbb259bd1)
    Signed-off-by: Wenchen Fan <wenc...@databricks.com>

----


---
