[ 
https://issues.apache.org/jira/browse/SPARK-31340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon updated SPARK-31340:
---------------------------------
    Description: 
Adding the UI filter AuthenticationFilter (from Hadoop) causes the Spark application
to never end, because the threads created by this class are never interrupted.

*To reproduce*

Start a local Spark context with hadoop-auth 3.1.0 and the following configuration:
{{spark.ui.enabled=true}}
{{spark.ui.filters=org.apache.hadoop.security.authentication.server.AuthenticationFilter}}
{{# plus all required ldap properties:}}
{{spark.org.apache.hadoop.security.authentication.server.AuthenticationFilter.param.ldap.*=...}}
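
In code, this corresponds to something like the following (a minimal sketch; the {{ldap.url}} name and value below are placeholders standing in for whatever ldap.* properties your setup actually requires, not a working LDAP configuration):

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Minimal reproduction sketch. The ldap.* name/value below are placeholders;
// any configuration that lets AuthenticationFilter initialize will do.
val conf = new SparkConf()
  .setMaster("local[*]")
  .setAppName("filter-destroy-repro")
  .set("spark.ui.enabled", "true")
  .set("spark.ui.filters",
    "org.apache.hadoop.security.authentication.server.AuthenticationFilter")
  .set("spark.org.apache.hadoop.security.authentication.server." +
    "AuthenticationFilter.param.ldap.url", "ldap://placeholder:389")

val sc = new SparkContext(conf)
sc.stop()
// Expected: the JVM exits here. Observed: a non-daemon scheduler thread
// started by the filter's secret provider keeps the process alive.
{code}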

*What's happening:*

In [AuthenticationFilter's|https://github.com/apache/hadoop/blob/branch-3.1/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java] init we have the following chain:

{{(line 178) initializeSecretProvider(filterConfig);}}
{{(line 209) secretProvider = constructSecretProvider(...)}}
{{(line 237) provider.init(config, ctx, validity);}}

If no secret provider is configured, the provider will be [RolloverSignerSecretProvider|https://github.com/apache/hadoop/blob/branch-3.1/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java], which (line 95) starts a new thread via
{{scheduler = Executors.newSingleThreadScheduledExecutor();}}

The created thread is only stopped in the destroy() method (line 106).
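
Reduced to its essence, the pattern looks like this (a simplified sketch of what RolloverSignerSecretProvider does, not the actual Hadoop code):

{code:scala}
import java.util.concurrent.{Executors, ScheduledExecutorService, TimeUnit}

// Simplified sketch of the RolloverSignerSecretProvider pattern: init()
// starts a non-daemon scheduler thread and only destroy() shuts it down.
class LeakySecretProvider {
  private var scheduler: ScheduledExecutorService = _

  def init(rolloverIntervalMillis: Long): Unit = {
    // Default thread factory => non-daemon thread: the JVM cannot exit
    // while this scheduler is alive.
    scheduler = Executors.newSingleThreadScheduledExecutor()
    scheduler.scheduleAtFixedRate(
      new Runnable { def run(): Unit = { /* roll the secret */ } },
      rolloverIntervalMillis, rolloverIntervalMillis, TimeUnit.MILLISECONDS)
  }

  def destroy(): Unit = {
    // If this is never called, the scheduler thread lives forever.
    if (scheduler != null) scheduler.shutdown()
  }
}
{code}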

*Unfortunately, this destroy() method is never called* when the Spark History Server is
stopped, leaving the thread running.

 

This ticket is not about the particular case of Hadoop's authentication filter;
it is about ensuring that any Filter added through spark.ui.filters has its
destroy() method called.
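
One possible direction (a sketch only; UiFilterRegistry and its wiring into the UI stop path are hypothetical, not existing Spark code): keep track of the Filter instances created from spark.ui.filters and destroy them when the web UI stops.

{code:scala}
import javax.servlet.Filter

// Hypothetical sketch: `installedFilters` stands in for however Spark keeps
// the Filter instances it created from spark.ui.filters. The point is only
// that the UI / history-server stop path must call destroy() on each of them.
class UiFilterRegistry {
  private var installedFilters: Seq[Filter] = Nil

  def register(filter: Filter): Unit = {
    installedFilters = installedFilters :+ filter
  }

  /** To be invoked from the stop path of the web UI. */
  def destroyAll(): Unit = {
    installedFilters.foreach { f =>
      try {
        f.destroy()
      } catch {
        case _: Exception => () // log and continue with the remaining filters
      }
    }
    installedFilters = Nil
  }
}
{code}

Note that a servlet container such as Jetty calls destroy() on the filters it manages when the owning handler is properly stopped, so the fix may also just be a matter of making sure those handlers are stopped on shutdown.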

> No call to destroy() for filter in SparkHistory
> -----------------------------------------------
>
>                 Key: SPARK-31340
>                 URL: https://issues.apache.org/jira/browse/SPARK-31340
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.4.5
>            Reporter: thierry accart
>            Priority: Major


