[ https://issues.apache.org/jira/browse/HADOOP-18238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17542089#comment-17542089 ]

Daniel Carl Jones commented on HADOOP-18238:
--------------------------------------------

To clarify on this issue:
 * We only ever want to shut down the connection pool after the deleteOnExit
processing in the FileSystem superclass's close() call has completed
successfully.
 * By moving the check down, we are ensuring this is the case. A second call to
.close() will try to delete any remaining keys if a previous call did not exit
successfully (see the sketch below).
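
For illustration, here is a minimal sketch of that ordering inside
SFTPFileSystem.close(), reusing the closed flag and connectionPool field from
the code quoted below. This is only an assumed shape of the behaviour described
above, not necessarily the exact patch that was merged:

@Override
public void close() throws IOException {
  if (closed.get()) {
    // Already fully closed by an earlier successful call.
    return;
  }
  // Run FileSystem.close() first: it performs processDeleteOnExit(), which
  // still needs working SFTP connections from the pool.
  super.close();
  // Only mark the filesystem closed and release the pool once the superclass
  // call has completed successfully. If super.close() threw, a later close()
  // can retry the remaining deleteOnExit paths before shutting the pool down.
  closed.set(true);
  if (connectionPool != null) {
    connectionPool.shutdown();
  }
}

With this ordering, the deleteOnExit paths are processed while the pool is
still usable, and the pool is only released after that processing succeeds.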

> Hadoop 3.3.1 SFTPFileSystem.close() method have problem
> -------------------------------------------------------
>
>                 Key: HADOOP-18238
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18238
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>    Affects Versions: 3.3.1
>            Reporter: yi liu
>            Assignee: groot
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> @Override
> public void close() throws IOException {
>   if (closed.getAndSet(true)) {
>     return;
>   }
>   try {
>     super.close();
>   } finally {
>     if (connectionPool != null) {
>       connectionPool.shutdown();
>     }
>   }
> }
>  
> If you execute this method, the fs cannot run deleteOnExit, because the fs is
> already closed.
> If close() is called manually, the SFTP fs shuts down the connection pool so
> that the JVM can exit normally, but deleteOnExit will then fail because the
> fs is already closed. If close() is not called, the connection pool is never
> released and the JVM cannot exit.
> https://issues.apache.org/jira/browse/HADOOP-17528 is the same problem in the
> 3.2.0 SFTPFileSystem.
>  



