[jira] [Updated] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-04-13 Thread Mikhail Pryakhin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pryakhin updated HDFS-1820:
---
Attachment: HDFS-1820.002.patch
Status: Patch Available  (was: In Progress)

> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
> Attachments: HDFS-1820.001.patch, HDFS-1820.002.patch
>
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Updated] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-04-12 Thread Mikhail Pryakhin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pryakhin updated HDFS-1820:
---
Status: Open  (was: Patch Available)

Instead of manually uploading a patch, I'll provide a link to the GitHub patch, as 
described 
[here|https://yetus.apache.org/documentation/in-progress/precommit-patchnames/].

> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
> Attachments: HDFS-1820.001.patch
>
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Commented] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-04-12 Thread Mikhail Pryakhin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081729#comment-17081729
 ] 

Mikhail Pryakhin commented on HDFS-1820:


A patch is available here:

[https://github.com/apache/hadoop/pull/1952.patch]

> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
> Attachments: HDFS-1820.001.patch
>
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Comment Edited] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-04-12 Thread Mikhail Pryakhin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081729#comment-17081729
 ] 

Mikhail Pryakhin edited comment on HDFS-1820 at 4/12/20, 10:14 AM:
---

The patch is available here:

[https://github.com/apache/hadoop/pull/1952.patch]


was (Author: m.pryahin):
a patch is available here

[https://github.com/apache/hadoop/pull/1952.patch]

> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
> Attachments: HDFS-1820.001.patch
>
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Comment Edited] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-04-10 Thread Mikhail Pryakhin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080935#comment-17080935
 ] 

Mikhail Pryakhin edited comment on HDFS-1820 at 4/10/20, 8:23 PM:
--

Fixed the issue where FTPFileSystem attempted to close the output stream even 
though it was not initialised:
 * Make sure the underlying output stream has been successfully created by the 
Apache Commons FTPClient before wrapping it in an FSDataOutputStream.
 * Gracefully release resources when the destination file can't be created due to 
a lack of permissions.


was (Author: m.pryahin):
* Making sure an underlying outputstream is successfully created by 
apache-commons FTPClient before wrapping it with FSDataOutputStream.
 * Gracefully release resources when a destination file can't be created due to 
lack of permissions.

> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
> Attachments: HDFS-1820.001.patch
>
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Comment Edited] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-04-10 Thread Mikhail Pryakhin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080935#comment-17080935
 ] 

Mikhail Pryakhin edited comment on HDFS-1820 at 4/10/20, 8:23 PM:
--

Fixed the issue where FTPFileSystem attempted to close the output stream even 
though it was not initialised:
 * Make sure the underlying output stream has been successfully created by the 
Apache Commons FTPClient before wrapping it in an FSDataOutputStream.
 * Gracefully release resources when the destination file can't be created due to 
a lack of permissions.


was (Author: m.pryahin):
Fixed the issue when FTPFileSystem attempted to close the outputstream even 
when it is not initialised:
 * Making sure an underlying outputstream is successfully created by 
apache-commons FTPClient before wrapping it with FSDataOutputStream.
 * Gracefully release resources when a destination file can't be created due to 
lack of permissions.

> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
> Attachments: HDFS-1820.001.patch
>
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Commented] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-04-09 Thread Mikhail Pryakhin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17079260#comment-17079260
 ] 

Mikhail Pryakhin commented on HDFS-1820:


Hello, I've just stumbled across this issue. I've reproduced it with unit tests 
and fixed it. I'd like to take over the issue and contribute a patch.

> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Priority: Major
>  Labels: hadoop
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Updated] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-04-10 Thread Mikhail Pryakhin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pryakhin updated HDFS-1820:
---
Attachment: HDFS-1820.001.patch
Status: Patch Available  (was: In Progress)

* Make sure the underlying output stream has been successfully created by the 
Apache Commons FTPClient before wrapping it in an FSDataOutputStream.
 * Gracefully release resources when the destination file can't be created due to 
a lack of permissions (a minimal sketch follows below).
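
A minimal sketch of that approach (illustrative only, not the attached patch 
itself; it assumes it runs inside FTPFileSystem#create with 'client', 'file', and 
'statistics' in scope):

{code:java}
// Sketch: verify the FTPClient actually produced a stream and returned a
// positive reply before wrapping it in an FSDataOutputStream; otherwise
// release the client gracefully and report the real error.
OutputStream fos = client.storeFileStream(file.getName());
if (fos == null || !FTPReply.isPositivePreliminary(client.getReplyCode())) {
  if (fos != null) {
    fos.close();
  }
  client.logout();
  client.disconnect();
  throw new IOException("Unable to create file: " + file + ", Aborting");
}
return new FSDataOutputStream(fos, statistics);
{code}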

> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
> Attachments: HDFS-1820.001.patch
>
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Work started] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-04-10 Thread Mikhail Pryakhin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-1820 started by Mikhail Pryakhin.
--
> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Work started] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-04-13 Thread Mikhail Pryakhin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-1820 started by Mikhail Pryakhin.
--
> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
> Attachments: HDFS-1820.001.patch
>
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Commented] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-04-30 Thread Mikhail Pryakhin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17096727#comment-17096727
 ] 

Mikhail Pryakhin commented on HDFS-1820:


Yes, will do; nothing is pending here.

Thank you.

> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
> Fix For: 3.3.1
>
> Attachments: HDFS-1820.001.patch, HDFS-1820.002.patch
>
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Updated] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-04-30 Thread Mikhail Pryakhin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pryakhin updated HDFS-1820:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
> Fix For: 3.3.1
>
> Attachments: HDFS-1820.001.patch, HDFS-1820.002.patch
>
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Resolved] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-05-11 Thread Mikhail Pryakhin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pryakhin resolved HDFS-1820.

Resolution: Fixed

A regression introduced by this patch will be fixed under 
https://issues.apache.org/jira/browse/HADOOP-17036

> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
> Fix For: 3.3.1
>
> Attachments: HDFS-1820.001.patch, HDFS-1820.002.patch
>
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Commented] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-05-11 Thread Mikhail Pryakhin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104460#comment-17104460
 ] 

Mikhail Pryakhin commented on HDFS-1820:


Hi Steve, sure! I'll provide a fix shortly; my apologies.

> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
> Fix For: 3.3.1
>
> Attachments: HDFS-1820.001.patch, HDFS-1820.002.patch
>
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Reopened] (HDFS-1820) FTPFileSystem attempts to close the outputstream even when it is not initialised

2020-05-11 Thread Mikhail Pryakhin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pryakhin reopened HDFS-1820:


Looks like this is causing HADOOP-17036.

> FTPFileSystem attempts to close the outputstream even when it is not 
> initialised
> 
>
> Key: HDFS-1820
> URL: https://issues.apache.org/jira/browse/HDFS-1820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.20.1
> Environment: occurs on all platforms
>Reporter: Sudharsan Sampath
>Assignee: Mikhail Pryakhin
>Priority: Major
>  Labels: hadoop
> Fix For: 3.3.1
>
> Attachments: HDFS-1820.001.patch, HDFS-1820.002.patch
>
>
> FTPFileSystem's create method attempts to close the output stream even when it 
> is not initialized, causing a NullPointerException. In our case the Apache 
> Commons FTPClient was not able to create the destination file due to a 
> permissions issue. The FTPClient promptly reported a 553 reply (a permissions 
> issue), but this was overlooked in FTPFileSystem's create method.
> The following code fails:
> if (!FTPReply.isPositivePreliminary(client.getReplyCode())) {
>   // The ftpClient is in an inconsistent state. Must close the stream,
>   // which in turn will log out and disconnect from the FTP server.
>   fos.close();
>   throw new IOException("Unable to create file: " + file + ", Aborting");
> }
> because 'fos' is null. As a result, the proper error message "Unable to create 
> file XXX" is not reported; instead, a NullPointerException is thrown.






[jira] [Comment Edited] (HDFS-15202) HDFS-client: boost ShortCircuit Cache

2020-05-18 Thread Mikhail Pryakhin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110419#comment-17110419
 ] 

Mikhail Pryakhin edited comment on HDFS-15202 at 5/18/20, 7:36 PM:
---

[~weichiu] it seems that this patch breaks test compilation in trunk


{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-hdfs: Compilation failure: Compilation 
failure:
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[244,48]
 error: ')' expected
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[244,53]
 error: illegal start of expression
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[244,54]
 error: ';' expected
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[245,14]
 error: not a statement
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[245,22]
 error: ';' expected
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[245,39]
 error:  expected
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[245,41]
 error: illegal start of expression
{code}



was (Author: m.pryahin):
[~weichiu] it seems that this patch breaks test compilation at trunk


{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-hdfs: Compilation failure: Compilation 
failure:
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[244,48]
 error: ')' expected
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[244,53]
 error: illegal start of expression
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[244,54]
 error: ';' expected
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[245,14]
 error: not a statement
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[245,22]
 error: ';' expected
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[245,39]
 error:  expected
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[245,41]
 error: illegal start of expression
{code}


> HDFS-client: boost ShortCircuit Cache
> -
>
> Key: HDFS-15202
> URL: https://issues.apache.org/jira/browse/HDFS-15202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
> Environment: 4 nodes E5-2698 v4 @ 2.20GHz, 700 Gb Mem.
> 8 RegionServers (2 by host)
> 8 tables by 64 regions by 1.88 Gb data in each = 900 Gb total
> Random read in 800 threads via YCSB and a little bit updates (10% of reads)
>Reporter: Danil Lipovoy
>Assignee: Danil Lipovoy
>Priority: Minor
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15202-Addendum-01.patch, HDFS_CPU_full_cycle.png, 
> cpu_SSC.png, cpu_SSC2.png, hdfs_cpu.png, hdfs_reads.png, hdfs_scc_3_test.png, 
> hdfs_scc_test_full-cycle.png, locks.png, requests_SSC.png
>
>
> I want to propose a way to improve the reading performance of the HDFS client. 
> The idea: create several ShortCircuitCache instances instead of one. 
> The key points:
> 1. Create an array of caches (sized by 
> clientShortCircuitNum = *dfs.client.short.circuit.num*; see the pull 
> requests below):
> {code:java}
> private ClientContext(String name, DfsClientConf conf, Configuration config) {
> ...
> shortCircuitCache = new ShortCircuitCache[this.clientShortCircuitNum];
> for (int i = 0; i < this.clientShortCircuitNum; i++) {
>   this.shortCircuitCache[i] = ShortCircuitCache.fromConf(scConf);
> }
> {code}
> 2. Then divide the blocks among the caches:
> {code:java}
>   public ShortCircuitCache getShortCircuitCache(long idx) {
> return shortCircuitCache[(int) (idx % clientShortCircuitNum)];
>   }
> {code}
> 3. And how to call it:
> 

[jira] [Comment Edited] (HDFS-15202) HDFS-client: boost ShortCircuit Cache

2020-05-18 Thread Mikhail Pryakhin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110419#comment-17110419
 ] 

Mikhail Pryakhin edited comment on HDFS-15202 at 5/18/20, 4:15 PM:
---

[~weichiu] it seems that this patch breaks test compilation at trunk


{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-hdfs: Compilation failure: Compilation 
failure:
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[244,48]
 error: ')' expected
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[244,53]
 error: illegal start of expression
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[244,54]
 error: ';' expected
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[245,14]
 error: not a statement
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[245,22]
 error: ';' expected
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[245,39]
 error:  expected
[ERROR] 
/home/vagrant/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/client/impl/TestBlockReaderLocal.java:[245,41]
 error: illegal start of expression
{code}



was (Author: m.pryahin):
[~weichiu] it seems that this patch breaks test compilation at trunk

> HDFS-client: boost ShortCircuit Cache
> -
>
> Key: HDFS-15202
> URL: https://issues.apache.org/jira/browse/HDFS-15202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
> Environment: 4 nodes E5-2698 v4 @ 2.20GHz, 700 Gb Mem.
> 8 RegionServers (2 by host)
> 8 tables by 64 regions by 1.88 Gb data in each = 900 Gb total
> Random read in 800 threads via YCSB and a little bit updates (10% of reads)
>Reporter: Danil Lipovoy
>Assignee: Danil Lipovoy
>Priority: Minor
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15202-Addendum-01.patch, HDFS_CPU_full_cycle.png, 
> cpu_SSC.png, cpu_SSC2.png, hdfs_cpu.png, hdfs_reads.png, hdfs_scc_3_test.png, 
> hdfs_scc_test_full-cycle.png, locks.png, requests_SSC.png
>
>
> I want to propose a way to improve the reading performance of the HDFS client. 
> The idea: create several ShortCircuitCache instances instead of one. 
> The key points:
> 1. Create an array of caches (sized by 
> clientShortCircuitNum = *dfs.client.short.circuit.num*; see the pull 
> requests below):
> {code:java}
> private ClientContext(String name, DfsClientConf conf, Configuration config) {
> ...
> shortCircuitCache = new ShortCircuitCache[this.clientShortCircuitNum];
> for (int i = 0; i < this.clientShortCircuitNum; i++) {
>   this.shortCircuitCache[i] = ShortCircuitCache.fromConf(scConf);
> }
> {code}
> 2. Then divide the blocks among the caches:
> {code:java}
>   public ShortCircuitCache getShortCircuitCache(long idx) {
> return shortCircuitCache[(int) (idx % clientShortCircuitNum)];
>   }
> {code}
> 3. And how to call it:
> {code:java}
> ShortCircuitCache cache = 
> clientContext.getShortCircuitCache(block.getBlockId());
> {code}
> The last digit of the block ID is evenly distributed from 0 to 9, which is why 
> all the caches fill up at approximately the same rate.
> This is good for performance. The attached charts show a load test reading 
> HDFS via HBase with clientShortCircuitNum = 1 vs 3: performance grows by 
> ~30%, while CPU usage increases by about 15%. 
> I hope this is interesting for someone.
> I'm ready to explain some non-obvious details.






[jira] [Commented] (HDFS-15202) HDFS-client: boost ShortCircuit Cache

2020-05-18 Thread Mikhail Pryakhin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110419#comment-17110419
 ] 

Mikhail Pryakhin commented on HDFS-15202:
-

[~weichiu] it seems that this patch breaks test compilation at trunk

> HDFS-client: boost ShortCircuit Cache
> -
>
> Key: HDFS-15202
> URL: https://issues.apache.org/jira/browse/HDFS-15202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
> Environment: 4 nodes E5-2698 v4 @ 2.20GHz, 700 Gb Mem.
> 8 RegionServers (2 by host)
> 8 tables by 64 regions by 1.88 Gb data in each = 900 Gb total
> Random read in 800 threads via YCSB and a little bit updates (10% of reads)
>Reporter: Danil Lipovoy
>Assignee: Danil Lipovoy
>Priority: Minor
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15202-Addendum-01.patch, HDFS_CPU_full_cycle.png, 
> cpu_SSC.png, cpu_SSC2.png, hdfs_cpu.png, hdfs_reads.png, hdfs_scc_3_test.png, 
> hdfs_scc_test_full-cycle.png, locks.png, requests_SSC.png
>
>
> I want to propose a way to improve the reading performance of the HDFS client. 
> The idea: create several ShortCircuitCache instances instead of one. 
> The key points:
> 1. Create an array of caches (sized by 
> clientShortCircuitNum = *dfs.client.short.circuit.num*; see the pull 
> requests below):
> {code:java}
> private ClientContext(String name, DfsClientConf conf, Configuration config) {
> ...
> shortCircuitCache = new ShortCircuitCache[this.clientShortCircuitNum];
> for (int i = 0; i < this.clientShortCircuitNum; i++) {
>   this.shortCircuitCache[i] = ShortCircuitCache.fromConf(scConf);
> }
> {code}
> 2. Then divide the blocks among the caches:
> {code:java}
>   public ShortCircuitCache getShortCircuitCache(long idx) {
> return shortCircuitCache[(int) (idx % clientShortCircuitNum)];
>   }
> {code}
> 3. And how to call it:
> {code:java}
> ShortCircuitCache cache = 
> clientContext.getShortCircuitCache(block.getBlockId());
> {code}
> The last digit of the block ID is evenly distributed from 0 to 9, which is why 
> all the caches fill up at approximately the same rate.
> This is good for performance. The attached charts show a load test reading 
> HDFS via HBase with clientShortCircuitNum = 1 vs 3: performance grows by 
> ~30%, while CPU usage increases by about 15%. 
> I hope this is interesting for someone.
> I'm ready to explain some non-obvious details.
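
As a usage sketch (assuming the proposal above is applied; the property name 
dfs.client.short.circuit.num is the one quoted in the description, while the 
NameNode URI and path are placeholders):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShortCircuitCacheNumExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Ask the client to create 3 ShortCircuitCache instances instead of 1.
    conf.setInt("dfs.client.short.circuit.num", 3);
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf)) {
      // Blocks read through this client would then be spread across the caches
      // by blockId % dfs.client.short.circuit.num, as in the snippet above.
      fs.open(new Path("/some/file")).close();
    }
  }
}
{code}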


