[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105080#comment-17105080
 ] 

Virajith Jalaparti commented on HADOOP-15565:
-

Hi [~umamaheswararao] - do you mind checking 
[^HADOOP-15565-branch-3.1.001.patch] and [^HADOOP-15565-branch-2.10.001.patch]? 
They are straightforward, but there were minor conflicts when I backported.

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HADOOP-15565-branch-2.10.001.patch, 
> HADOOP-15565-branch-3.1.001.patch, HADOOP-15565-branch-3.2.001.patch, 
> HADOOP-15565-branch-3.2.002.patch, HADOOP-15565.0001.patch, 
> HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, 
> HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, 
> HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems, 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE, so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, while the other 
> instances (the child filesystems) are cached in the inner cache.
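
A minimal sketch of the proposed inner cache follows. The class and method 
names (InnerCache, closeAll) are illustrative, not necessarily those used in 
the attached patches:

{code:java}
// Minimal sketch of the proposed inner cache; names are illustrative.
import java.io.IOException;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

class InnerCache {
  // One child FileSystem per target URI, private to one ViewFileSystem.
  private final Map<URI, FileSystem> map = new HashMap<>();

  synchronized FileSystem get(URI uri, Configuration conf) throws IOException {
    FileSystem fs = map.get(uri);
    if (fs == null) {
      // newInstance() bypasses FileSystem.CACHE, so this child is owned
      // exclusively by the inner cache and is safe to close later.
      fs = FileSystem.newInstance(uri, conf);
      map.put(uri, fs);
    }
    return fs;
  }

  // Called from ViewFileSystem.close() before it removes itself from
  // FileSystem.CACHE.
  synchronized void closeAll() throws IOException {
    for (FileSystem fs : map.values()) {
      fs.close();
    }
    map.clear();
  }
}
{code}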



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15565:

Attachment: HADOOP-15565-branch-2.10.001.patch

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HADOOP-15565-branch-2.10.001.patch, 
> HADOOP-15565-branch-3.1.001.patch, HADOOP-15565-branch-3.2.001.patch, 
> HADOOP-15565-branch-3.2.002.patch, HADOOP-15565.0001.patch, 
> HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, 
> HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, 
> HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems, 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE, so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, while the other 
> instances (the child filesystems) are cached in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15565:

Status: Patch Available  (was: Open)

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HADOOP-15565-branch-2.10.001.patch, 
> HADOOP-15565-branch-3.1.001.patch, HADOOP-15565-branch-3.2.001.patch, 
> HADOOP-15565-branch-3.2.002.patch, HADOOP-15565.0001.patch, 
> HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, 
> HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, 
> HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems, 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE, so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, while the other 
> instances (the child filesystems) are cached in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15565:

Status: Open  (was: Patch Available)

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HADOOP-15565-branch-3.1.001.patch, 
> HADOOP-15565-branch-3.2.001.patch, HADOOP-15565-branch-3.2.002.patch, 
> HADOOP-15565.0001.patch, HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, 
> HADOOP-15565.0004.patch, HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, 
> HADOOP-15565.0006.patch, HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems, 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE, so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, while the other 
> instances (the child filesystems) are cached in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17024) ListStatus on ViewFS root (ls "/") should list the linkFallBack root (configured target root).

2020-05-11 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G reassigned HADOOP-17024:


Assignee: Abhishek Das

> ListStatus on ViewFS root (ls "/") should list the linkFallBack root 
> (configured target root).
> --
>
> Key: HADOOP-17024
> URL: https://issues.apache.org/jira/browse/HADOOP-17024
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 3.2.2
>Reporter: Uma Maheswara Rao G
>Assignee: Abhishek Das
>Priority: Major
>
> As part of the design doc HDFS-15289, [~sanjay.radia] and I discussed the 
> following scenarios when fallback is enabled.
> *Behavior when fallback is enabled:*
>    Assume FS trees and mount mappings like below:
>    mount link /a/b/c/d  → hdfs://nn1/a/b
>    mount link /a/p/q/r  → hdfs://nn2/a/b
>    fallback → hdfs://nn3/  (nn3 contains /a/c and /x/z)
>  # Open(/x/y) goes to nn3 (the fallback) - WORKS
>  # Create(/x/foo) creates foo in dir /x on nn3 - WORKS
>  # ls / should list /a and /x. Today this does not work, and it is a bug 
> because it is inconsistent with open(/x/y) succeeding.
>  # Create /y : fails - also fails when not using fallback - WORKS
>  # Create /a/z : fails - also fails when not using fallback - WORKS
>  # ls /a should list /b and /p as expected, and will not show the fallback 
> on nn3 - WORKS
>  
> This Jira will fix issue #3. So, when fallback is enabled, ls should show a 
> merged view of the mount links plus the fallback root. (This will only apply 
> at the root level.)
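
For reference, the scenario above corresponds to a mount table like the 
following sketch. The table name "cluster" is illustrative, and the 
linkFallback key name is quoted from memory:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ViewFsFallbackExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Mount links from the scenario; "cluster" is an illustrative name.
    conf.set("fs.viewfs.mounttable.cluster.link./a/b/c/d", "hdfs://nn1/a/b");
    conf.set("fs.viewfs.mounttable.cluster.link./a/p/q/r", "hdfs://nn2/a/b");
    // Paths that match no mount link fall through to nn3.
    conf.set("fs.viewfs.mounttable.cluster.linkFallback", "hdfs://nn3/");

    FileSystem viewFs = FileSystem.get(URI.create("viewfs://cluster/"), conf);
    // After this fix, listing "/" should return the mount points (/a)
    // merged with the fallback root's top-level directories (/x).
    for (FileStatus st : viewFs.listStatus(new Path("/"))) {
      System.out.println(st.getPath());
    }
  }
}
{code}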



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17024) ListStatus on ViewFS root (ls "/") should list the linkFallBack root (configured target root).

2020-05-11 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105064#comment-17105064
 ] 

Uma Maheswara Rao G commented on HADOOP-17024:
--

I have added you to the contributors list and assigned this Jira to you. Thanks!

> ListStatus on ViewFS root (ls "/") should list the linkFallBack root 
> (configured target root).
> --
>
> Key: HADOOP-17024
> URL: https://issues.apache.org/jira/browse/HADOOP-17024
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 3.2.2
>Reporter: Uma Maheswara Rao G
>Assignee: Abhishek Das
>Priority: Major
>
> As part of the design doc HDFS-15289, [~sanjay.radia] and I discussed the 
> following scenarios when fallback is enabled.
> *Behavior when fallback is enabled:*
>    Assume FS trees and mount mappings like below:
>    mount link /a/b/c/d  → hdfs://nn1/a/b
>    mount link /a/p/q/r  → hdfs://nn2/a/b
>    fallback → hdfs://nn3/  (nn3 contains /a/c and /x/z)
>  # Open(/x/y) goes to nn3 (the fallback) - WORKS
>  # Create(/x/foo) creates foo in dir /x on nn3 - WORKS
>  # ls / should list /a and /x. Today this does not work, and it is a bug 
> because it is inconsistent with open(/x/y) succeeding.
>  # Create /y : fails - also fails when not using fallback - WORKS
>  # Create /a/z : fails - also fails when not using fallback - WORKS
>  # ls /a should list /b and /p as expected, and will not show the fallback 
> on nn3 - WORKS
>  
> This Jira will fix issue #3. So, when fallback is enabled, ls should show a 
> merged view of the mount links plus the fallback root. (This will only apply 
> at the root level.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17024) ListStatus on ViewFS root (ls "/") should list the linkFallBack root (configured target root).

2020-05-11 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105057#comment-17105057
 ] 

Uma Maheswara Rao G edited comment on HADOOP-17024 at 5/12/20, 4:43 AM:


Sure. Thank you for filing them. 
I left it unassigned on purpose, in case someone wanted to take it. Thanks for 
working on it.


was (Author: umamaheswararao):
Sure. I left it unassigned on purpose, in case someone wanted to take it. 
Thanks for working on it.

> ListStatus on ViewFS root (ls "/") should list the linkFallBack root 
> (configured target root).
> --
>
> Key: HADOOP-17024
> URL: https://issues.apache.org/jira/browse/HADOOP-17024
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 3.2.2
>Reporter: Uma Maheswara Rao G
>Priority: Major
>
> As part of the design doc HDFS-15289, [~sanjay.radia] and I discussed the 
> following scenarios when fallback is enabled.
> *Behavior when fallback is enabled:*
>    Assume FS trees and mount mappings like below:
>    mount link /a/b/c/d  → hdfs://nn1/a/b
>    mount link /a/p/q/r  → hdfs://nn2/a/b
>    fallback → hdfs://nn3/  (nn3 contains /a/c and /x/z)
>  # Open(/x/y) goes to nn3 (the fallback) - WORKS
>  # Create(/x/foo) creates foo in dir /x on nn3 - WORKS
>  # ls / should list /a and /x. Today this does not work, and it is a bug 
> because it is inconsistent with open(/x/y) succeeding.
>  # Create /y : fails - also fails when not using fallback - WORKS
>  # Create /a/z : fails - also fails when not using fallback - WORKS
>  # ls /a should list /b and /p as expected, and will not show the fallback 
> on nn3 - WORKS
>  
> This Jira will fix issue #3. So, when fallback is enabled, ls should show a 
> merged view of the mount links plus the fallback root. (This will only apply 
> at the root level.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16916) ABFS: Delegation SAS generator for integration with Ranger

2020-05-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105060#comment-17105060
 ] 

Hadoop QA commented on HADOOP-16916:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
49s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 2 
new + 9 unchanged - 0 fixed = 11 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1965/8/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1965 |
| JIRA Issue | HADOOP-16916 |
| Optional Tests | dupname asflicense xml compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
| uname | Linux 887bc1c2afbc 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 

[jira] [Commented] (HADOOP-17024) ListStatus on ViewFS root (ls "/") should list the linkFallBack root (configured target root).

2020-05-11 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105057#comment-17105057
 ] 

Uma Maheswara Rao G commented on HADOOP-17024:
--

Sure. I left it unassigned on purpose, in case someone wanted to take it. 
Thanks for working on it.

> ListStatus on ViewFS root (ls "/") should list the linkFallBack root 
> (configured target root).
> --
>
> Key: HADOOP-17024
> URL: https://issues.apache.org/jira/browse/HADOOP-17024
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 3.2.2
>Reporter: Uma Maheswara Rao G
>Priority: Major
>
> As part of the design doc HDFS-15289, [~sanjay.radia] and I discussed the 
> following scenarios when fallback is enabled.
> *Behavior when fallback is enabled:*
>    Assume FS trees and mount mappings like below:
>    mount link /a/b/c/d  → hdfs://nn1/a/b
>    mount link /a/p/q/r  → hdfs://nn2/a/b
>    fallback → hdfs://nn3/  (nn3 contains /a/c and /x/z)
>  # Open(/x/y) goes to nn3 (the fallback) - WORKS
>  # Create(/x/foo) creates foo in dir /x on nn3 - WORKS
>  # ls / should list /a and /x. Today this does not work, and it is a bug 
> because it is inconsistent with open(/x/y) succeeding.
>  # Create /y : fails - also fails when not using fallback - WORKS
>  # Create /a/z : fails - also fails when not using fallback - WORKS
>  # ls /a should list /b and /p as expected, and will not show the fallback 
> on nn3 - WORKS
>  
> This Jira will fix issue #3. So, when fallback is enabled, ls should show a 
> merged view of the mount links plus the fallback root. (This will only apply 
> at the root level.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16916) ABFS: Delegation SAS generator for integration with Ranger

2020-05-11 Thread Thomas Marqardt (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105027#comment-17105027
 ] 

Thomas Marqardt commented on HADOOP-16916:
--

Updated PR 1965 to address feedback from Sneha.  The configuration to set the 
REST version has been removed and the ordering of imports has been fixed.

All tests are passing against my accounts.

Hierarchical Namespace enabled:
$ mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 0, Skipped: 41
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24

Hierarchical Namespace disabled:
$ mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 0, Skipped: 244
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24

> ABFS: Delegation SAS generator for integration with Ranger
> --
>
> Key: HADOOP-16916
> URL: https://issues.apache.org/jira/browse/HADOOP-16916
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Thomas Marqardt
>Assignee: Thomas Marqardt
>Priority: Minor
> Attachments: HADOOP-16916.001.patch
>
>
> HADOOP-16730 added support for Shared Access Signatures (SAS).  Azure Data 
> Lake Storage Gen2 supports a new SAS type known as User Delegation SAS.  This 
> Jira tracks an update to the ABFS driver that will include a Delegation SAS 
> generator and tests to validate that this SAS type is working correctly with 
> the driver.
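
A hedged sketch of how a custom provider plugs into that extension point. The 
SASTokenProvider interface comes from HADOOP-16730, but the method shapes and 
config key names below are quoted from memory; consult the driver source for 
the exact signatures:

{code:java}
// Sketch only: a Ranger-backed delegation SAS provider for ABFS.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider;

public class RangerDelegationSASProvider implements SASTokenProvider {
  @Override
  public void initialize(Configuration conf, String accountName)
      throws IOException {
    // e.g. read the authorization service endpoint from conf.
  }

  @Override
  public String getSASToken(String account, String fileSystem, String path,
      String operation) throws IOException {
    // A Ranger-backed service would authorize (account, fileSystem, path,
    // operation) and return a user delegation SAS scoped to exactly that.
    return callAuthorizationService(account, fileSystem, path, operation);
  }

  private String callAuthorizationService(String account, String fileSystem,
      String path, String operation) {
    throw new UnsupportedOperationException("illustrative stub");
  }
}
{code}

Wiring it up would then amount to setting the account auth type to SAS and 
pointing the SAS provider config key at the implementing class (again, key 
names from memory).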



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17003) No Log compression and retention in Hadoop

2020-05-11 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-17003:
-
Component/s: kms

> No Log compression and retention in Hadoop
> --
>
> Key: HADOOP-17003
> URL: https://issues.apache.org/jira/browse/HADOOP-17003
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17003.001.patch
>
>
> Hadoop logging lacks several important features, and the logs it generates 
> end up eating disk space.
> We need an implementation that satisfies the following three features: 1) 
> time-based rolling, 2) retention, and 3) compression.
> For example, KMS logs have no retention or compression:
> {code:bash}
> -rw-r--r-- 1 hkms users 704M Mar 20 23:59 kms.log.2020-03-20
> -rw-r--r-- 1 hkms users 731M Mar 21 23:59 kms.log.2020-03-21
> -rw-r--r-- 1 hkms users 750M Mar 22 23:59 kms.log.2020-03-22
> -rw-r--r-- 1 hkms users 757M Mar 23 23:59 kms.log.2020-03-23
> -rw-r--r-- 1 hkms users 805M Mar 24 23:59 kms.log.2020-03-24
> -rw-r--r-- 1 hkms users 858M Mar 25 23:59 kms.log.2020-03-25
> -rw-r--r-- 1 hkms users 875M Mar 26 23:59 kms.log.2020-03-26
> -rw-r--r-- 1 hkms users 754M Mar 27 23:59 kms.log.2020-03-27
> {code}
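
For illustration of the three features working together (not the attached 
patch, and Hadoop's own logging stack is log4j-based, so this is only a sketch 
of the desired behavior): a logback-style appender gives time-based rolling 
via %d, retention via maxHistory, and compression via a .gz file name pattern.

{code:xml}
<!-- Illustrative only: daily rolling (%d), retention (maxHistory),
     and compression (.gz suffix) in one logback appender. -->
<appender name="KMS" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>${kms.log.dir}/kms.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>${kms.log.dir}/kms.log.%d{yyyy-MM-dd}.gz</fileNamePattern>
    <maxHistory>30</maxHistory>
  </rollingPolicy>
  <encoder>
    <pattern>%d{ISO8601} %-5p %c{1} - %m%n</pattern>
  </encoder>
</appender>
{code}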



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104999#comment-17104999
 ] 

Hadoop QA commented on HADOOP-15565:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
13s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
13s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
42s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} branch-3.1 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
54s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
56s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 16s{color} 
| {color:red} root generated 15 new + 1301 unchanged - 15 fixed = 1316 total 
(was 1316) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
40s{color} | {color:green} root: The patch generated 0 new + 342 unchanged - 3 
fixed = 342 total (was 345) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
2m 10s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}241m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16934/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-15565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13002656/HADOOP-15565-branch-3.1.001.patch
 |
| Optional 

[jira] [Commented] (HADOOP-17003) No Log compression and retention in Hadoop

2020-05-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104993#comment-17104993
 ] 

Hadoop QA commented on HADOOP-17003:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} branch/hadoop-project no findbugs output file 
(findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 48s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 15m 
30s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} hadoop-project has no data from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
21s{color} | {color:red} hadoop-common-project/hadoop-common generated 3 new + 
0 unchanged - 0 fixed = 3 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 47s{color} 
| {color:red} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
37s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
47s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Commented] (HADOOP-17021) Add concat fs command

2020-05-11 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104989#comment-17104989
 ] 

Jinglun commented on HADOOP-17021:
--

Hi [~ste...@apache.org] [~weichiu], could you help review this? Thanks!

> Add concat fs command
> -
>
> Key: HADOOP-17021
> URL: https://issues.apache.org/jira/browse/HADOOP-17021
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HADOOP-17021.001.patch
>
>
> We should add a concat fs command for ease of use. It would concatenate 
> existing source files into the target file using FileSystem.concat().
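
For context, the existing API call such a command would wrap is 
FileSystem.concat(Path, Path[]). A minimal sketch, with illustrative paths 
(the shell command's name and flags are whatever the patch defines):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConcatExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(URI.create("hdfs://nn1/"),
        new Configuration());
    Path target = new Path("/data/part-all");
    Path[] sources = { new Path("/data/part-1"), new Path("/data/part-2") };
    // Appends the source files' blocks onto the target and deletes the
    // sources. HDFS implements this; filesystems without support throw
    // UnsupportedOperationException.
    fs.concat(target, sources);
  }
}
{code}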



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17003) No Log compression and retention in Hadoop

2020-05-11 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17003:
---
Attachment: HADOOP-17003.001.patch
Status: Patch Available  (was: In Progress)

> No Log compression and retention in Hadoop
> --
>
> Key: HADOOP-17003
> URL: https://issues.apache.org/jira/browse/HADOOP-17003
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17003.001.patch
>
>
> Hadoop logging lacks several important features, and the logs it generates 
> end up eating disk space.
> We need an implementation that satisfies the following three features: 1) 
> time-based rolling, 2) retention, and 3) compression.
> For example, KMS logs have no retention or compression:
> {code:bash}
> -rw-r--r-- 1 hkms users 704M Mar 20 23:59 kms.log.2020-03-20
> -rw-r--r-- 1 hkms users 731M Mar 21 23:59 kms.log.2020-03-21
> -rw-r--r-- 1 hkms users 750M Mar 22 23:59 kms.log.2020-03-22
> -rw-r--r-- 1 hkms users 757M Mar 23 23:59 kms.log.2020-03-23
> -rw-r--r-- 1 hkms users 805M Mar 24 23:59 kms.log.2020-03-24
> -rw-r--r-- 1 hkms users 858M Mar 25 23:59 kms.log.2020-03-25
> -rw-r--r-- 1 hkms users 875M Mar 26 23:59 kms.log.2020-03-26
> -rw-r--r-- 1 hkms users 754M Mar 27 23:59 kms.log.2020-03-27
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-17003) No Log compression and retention in Hadoop

2020-05-11 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17003 started by Ahmed Hussein.
--
> No Log compression and retention in Hadoop
> --
>
> Key: HADOOP-17003
> URL: https://issues.apache.org/jira/browse/HADOOP-17003
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Hadoop logging lacks several important features, and the logs it generates 
> end up eating disk space.
> We need an implementation that satisfies the following three features: 1) 
> time-based rolling, 2) retention, and 3) compression.
> For example, KMS logs have no retention or compression:
> {code:bash}
> -rw-r--r-- 1 hkms users 704M Mar 20 23:59 kms.log.2020-03-20
> -rw-r--r-- 1 hkms users 731M Mar 21 23:59 kms.log.2020-03-21
> -rw-r--r-- 1 hkms users 750M Mar 22 23:59 kms.log.2020-03-22
> -rw-r--r-- 1 hkms users 757M Mar 23 23:59 kms.log.2020-03-23
> -rw-r--r-- 1 hkms users 805M Mar 24 23:59 kms.log.2020-03-24
> -rw-r--r-- 1 hkms users 858M Mar 25 23:59 kms.log.2020-03-25
> -rw-r--r-- 1 hkms users 875M Mar 26 23:59 kms.log.2020-03-26
> -rw-r--r-- 1 hkms users 754M Mar 27 23:59 kms.log.2020-03-27
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15565:

Attachment: HADOOP-15565-branch-3.1.001.patch

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HADOOP-15565-branch-3.1.001.patch, 
> HADOOP-15565-branch-3.2.001.patch, HADOOP-15565-branch-3.2.002.patch, 
> HADOOP-15565.0001.patch, HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, 
> HADOOP-15565.0004.patch, HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, 
> HADOOP-15565.0006.patch, HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems, 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE, so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, while the other 
> instances (the child filesystems) are cached in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15565:

Status: Patch Available  (was: Open)

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HADOOP-15565-branch-3.1.001.patch, 
> HADOOP-15565-branch-3.2.001.patch, HADOOP-15565-branch-3.2.002.patch, 
> HADOOP-15565.0001.patch, HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, 
> HADOOP-15565.0004.patch, HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, 
> HADOOP-15565.0006.patch, HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems, 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE, so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, while the other 
> instances (the child filesystems) are cached in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15565:

Status: Open  (was: Patch Available)

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15565-branch-3.2.001.patch, 
> HADOOP-15565-branch-3.2.002.patch, HADOOP-15565.0001.patch, 
> HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, 
> HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, 
> HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems, 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE, so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, while the other 
> instances (the child filesystems) are cached in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15565:

Fix Version/s: 3.2.2

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HADOOP-15565-branch-3.2.001.patch, 
> HADOOP-15565-branch-3.2.002.patch, HADOOP-15565.0001.patch, 
> HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, 
> HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, 
> HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems, 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE, so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, while the other 
> instances (the child filesystems) are cached in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104861#comment-17104861
 ] 

Virajith Jalaparti edited comment on HADOOP-15565 at 5/11/20, 7:56 PM:
---

Thanks [~umamaheswararao]. The test failures in the last Jenkins run are not 
related to this change; the javac issues are warnings, which I will leave as-is 
to stay close to the original patch. Will commit 
[^HADOOP-15565-branch-3.2.002.patch] soon. 


was (Author: virajith):
Thanks [~umamaheswararao]. The test failures in the last Jenkins run are not 
related to this change; the javac issues are warnings, and changing these would 
not be a simple backport. Will commit [^HADOOP-15565-branch-3.2.002.patch] 
soon. 

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15565-branch-3.2.001.patch, 
> HADOOP-15565-branch-3.2.002.patch, HADOOP-15565.0001.patch, 
> HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, 
> HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, 
> HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems, 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE, so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, while the other 
> instances (the child filesystems) are cached in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104861#comment-17104861
 ] 

Virajith Jalaparti commented on HADOOP-15565:
-

Thanks [~umamaheswararao]. The test failures in the last Jenkins run are not 
related to this change; the javac issues are warnings, and changing these would 
not be a simple backport. Will commit [^HADOOP-15565-branch-3.2.002.patch] 
soon. 

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15565-branch-3.2.001.patch, 
> HADOOP-15565-branch-3.2.002.patch, HADOOP-15565.0001.patch, 
> HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, 
> HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, 
> HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems, 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE, so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, while the other 
> instances (the child filesystems) are cached in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104850#comment-17104850
 ] 

Hadoop QA commented on HADOOP-15565:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
38s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
52s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 3s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
4s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} branch-3.2 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
26s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
44s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 19m 21s{color} 
| {color:red} root generated 15 new + 1350 unchanged - 15 fixed = 1365 total 
(was 1365) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
55s{color} | {color:green} root: The patch generated 0 new + 342 unchanged - 3 
fixed = 342 total (was 345) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
32s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}133m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}296m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.TestFileCreation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 

[jira] [Updated] (HADOOP-15524) BytesWritable causes OOME when array size reaches Integer.MAX_VALUE

2020-05-11 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15524:
-
Status: Patch Available  (was: Open)

> BytesWritable causes OOME when array size reaches Integer.MAX_VALUE
> ---
>
> Key: HADOOP-15524
> URL: https://issues.apache.org/jira/browse/HADOOP-15524
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Joseph Smith
>Assignee: Joseph Smith
>Priority: Major
>
> BytesWritable.setSize uses Integer.MAX_VALUE to initialize the internal 
> array. In my environment, this causes an OOME:
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> {code}
> byte[Integer.MAX_VALUE - 2] must be used to prevent this error.
> Tested on OSX and CentOS 7 using Java version 1.8.0_131.
> I noticed that java.util.ArrayList contains the following:
> {code:java}
> /**
>  * The maximum size of array to allocate.
>  * Some VMs reserve some header words in an array.
>  * Attempts to allocate larger arrays may result in
>  * OutOfMemoryError: Requested array size exceeds VM limit
>  */
> private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
> {code}
>  
> BytesWritable.setSize should use something similar to prevent an OOME from 
> occurring.
>  
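
For illustration, a minimal sketch of the ArrayList-style guard suggested above, using a simplified, hypothetical buffer class (this is not the actual BytesWritable code, and the growth factor is an assumption):

{code:java}
import java.util.Arrays;

/** Simplified, hypothetical buffer showing a VM-safe growth cap. */
public class SafeGrowthBuffer {
  // Some VMs reserve header words in an array, so allocating
  // byte[Integer.MAX_VALUE] fails with "Requested array size exceeds
  // VM limit"; cap slightly below it, as java.util.ArrayList does.
  private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

  private byte[] bytes = new byte[0];
  private int size;

  public void setSize(int newSize) {
    if (newSize > MAX_ARRAY_SIZE) {
      throw new IllegalArgumentException(
          "Requested size exceeds the VM-safe maximum: " + newSize);
    }
    if (newSize > bytes.length) {
      // Grow by ~1.5x for amortized cost, but never past the safe cap.
      long grown = newSize + (newSize >> 1);
      int capacity = (int) Math.min(grown, MAX_ARRAY_SIZE);
      bytes = Arrays.copyOf(bytes, capacity);
    }
    size = newSize;
  }
}
{code}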



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15524) BytesWritable causes OOME when array size reaches Integer.MAX_VALUE

2020-05-11 Thread Nanda kumar (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104735#comment-17104735
 ] 

Nanda kumar commented on HADOOP-15524:
--

Thanks for the update, [~arp].
I'm +1 on the change. I've just retriggered Jenkins and will merge after the build.

https://builds.apache.org/job/hadoop-multibranch/job/PR-393/10/

> BytesWritable causes OOME when array size reaches Integer.MAX_VALUE
> ---
>
> Key: HADOOP-15524
> URL: https://issues.apache.org/jira/browse/HADOOP-15524
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Joseph Smith
>Assignee: Joseph Smith
>Priority: Major
>
> BytesWritable.setSize uses Integer.MAX_VALUE to initialize the internal 
> array. In my environment, this causes an OOME:
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> {code}
> byte[Integer.MAX_VALUE - 2] must be used to prevent this error.
> Tested on OSX and CentOS 7 using Java version 1.8.0_131.
> I noticed that java.util.ArrayList contains the following:
> {code:java}
> /**
>  * The maximum size of array to allocate.
>  * Some VMs reserve some header words in an array.
>  * Attempts to allocate larger arrays may result in
>  * OutOfMemoryError: Requested array size exceeds VM limit
>  */
> private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
> {code}
>  
> BytesWritable.setSize should use something similar to prevent an OOME from 
> occurring.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104713#comment-17104713
 ] 

Uma Maheswara Rao G commented on HADOOP-15565:
--

+1 LGTM

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15565-branch-3.2.001.patch, 
> HADOOP-15565-branch-3.2.002.patch, HADOOP-15565.0001.patch, 
> HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, 
> HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, 
> HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all 
> the ViewFileSystem instances. We couldn't simply close all the child 
> filesystems because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem that caches all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem itself is still cached by FileSystem.CACHE, so 
> there won't be too many FileSystem instances.
> FileSystem.CACHE holds the ViewFileSystem instance, and the other 
> instances (the child filesystems) are held in the inner cache.
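
For illustration, a minimal sketch of the inner-cache idea described above, with simplified types and a URI key (an assumption); this is not the committed patch:

{code:java}
import java.io.IOException;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

/** Per-ViewFileSystem cache of child filesystems (illustrative only). */
class InnerCache {
  private final Map<URI, FileSystem> map = new HashMap<>();

  synchronized FileSystem get(URI uri, Configuration conf) throws IOException {
    FileSystem fs = map.get(uri);
    if (fs == null) {
      // newInstance() bypasses FileSystem.CACHE, so each child belongs to
      // this ViewFileSystem alone and is safe to close with it.
      fs = FileSystem.newInstance(uri, conf);
      map.put(uri, fs);
    }
    return fs;
  }

  /** Called from ViewFileSystem.close() to release all children. */
  synchronized void closeAll() {
    for (FileSystem fs : map.values()) {
      try {
        fs.close();
      } catch (IOException ignored) {
        // best-effort close of the remaining children
      }
    }
    map.clear();
  }
}
{code}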



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14254) Add a Distcp option to preserve Erasure Coding attributes

2020-05-11 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104627#comment-17104627
 ] 

Wei-Chiu Chuang commented on HADOOP-14254:
--

Thanks for picking this up!

> Add a Distcp option to preserve Erasure Coding attributes
> -
>
> Key: HADOOP-14254
> URL: https://issues.apache.org/jira/browse/HADOOP-14254
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-14254-01.patch, HADOOP-14254-02.patch, 
> HADOOP-14254-03.patch, HADOOP-14254-04.patch, HADOOP-14254.test.patch, 
> HDFS-11472.001.patch
>
>
> Currently Distcp does not preserve erasure coding attributes properly. I 
> propose we add a "-pe" switch to ensure erasure-coded files at the source are 
> copied as erasure-coded files at the destination.
> For example, suppose the src cluster has the following directories and files 
> that are copied to the dest cluster:
> hdfs://src/ - root directory, replicated
> hdfs://src/foo - erasure-coding-enabled directory
> hdfs://src/foo/bar - erasure-coded file
> After distcp, hdfs://dest/foo and hdfs://dest/foo/bar will not be erasure 
> coded.
> It may be useful to add such a capability. One potential use is disaster 
> recovery; another is out-of-place cluster upgrade.
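
For illustration, a hedged sketch of what preserving the EC policy could involve; the class and method names here are hypothetical, and the committed -pe support is implemented inside DistCp itself:

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

public class EcPreserveSketch {
  /**
   * Re-apply the source's erasure coding policy on the destination path.
   * EC policies are set on directories, so a real implementation applies
   * this to copied directories before writing their files.
   */
  static void preserveEcPolicy(DistributedFileSystem srcFs, Path src,
      DistributedFileSystem dstFs, Path dst) throws IOException {
    ErasureCodingPolicy policy = srcFs.getErasureCodingPolicy(src);
    if (policy != null) {
      dstFs.setErasureCodingPolicy(dst, policy.getName());
    }
  }
}
{code}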



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17033) Update commons-codec from 1.11 to 1.14

2020-05-11 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104624#comment-17104624
 ] 

Hudson commented on HADOOP-17033:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18234 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18234/])
HADOOP-17033. Update commons-codec from 1.11 to 1.14. (#2000) (github: rev 
bd342bef64e5b7219c6b08e585e2b122d06793e0)
* (edit) hadoop-project/pom.xml


> Update commons-codec from 1.11 to 1.14
> --
>
> Key: HADOOP-17033
> URL: https://issues.apache.org/jira/browse/HADOOP-17033
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.4.0
>
>
> We are on commons-codec 1.11, which is slightly outdated; the latest is 1.14. 
> We should update it if it's not too much of a hassle.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14254) Add a Distcp option to preserve Erasure Coding attributes

2020-05-11 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-14254:


Assignee: Ayush Saxena

> Add a Distcp option to preserve Erasure Coding attributes
> -
>
> Key: HADOOP-14254
> URL: https://issues.apache.org/jira/browse/HADOOP-14254
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-14254-01.patch, HADOOP-14254-02.patch, 
> HADOOP-14254-03.patch, HADOOP-14254-04.patch, HADOOP-14254.test.patch, 
> HDFS-11472.001.patch
>
>
> Currently Distcp does not preserve erasure coding attributes properly. I 
> propose we add a "-pe" switch to ensure erasure-coded files at the source are 
> copied as erasure-coded files at the destination.
> For example, suppose the src cluster has the following directories and files 
> that are copied to the dest cluster:
> hdfs://src/ - root directory, replicated
> hdfs://src/foo - erasure-coding-enabled directory
> hdfs://src/foo/bar - erasure-coded file
> After distcp, hdfs://dest/foo and hdfs://dest/foo/bar will not be erasure 
> coded.
> It may be useful to add such a capability. One potential use is disaster 
> recovery; another is out-of-place cluster upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14254) Add a Distcp option to preserve Erasure Coding attributes

2020-05-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104607#comment-17104607
 ] 

Hadoop QA commented on HADOOP-14254:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
37s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16932/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-14254 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13002637/HADOOP-14254-04.patch 
|
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 02cba8b94a11 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4c53fb9ce10 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16932/testReport/ |
| Max. process+thread count | 345 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 

[jira] [Resolved] (HADOOP-17033) Update commons-codec from 1.11 to 1.14

2020-05-11 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-17033.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

> Update commons-codec from 1.11 to 1.14
> --
>
> Key: HADOOP-17033
> URL: https://issues.apache.org/jira/browse/HADOOP-17033
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.4.0
>
>
> We are on commons-codec 1.11, which is slightly outdated; the latest is 1.14. 
> We should update it if it's not too much of a hassle.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14254) Add a Distcp option to preserve Erasure Coding attributes

2020-05-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104533#comment-17104533
 ] 

Ayush Saxena commented on HADOOP-14254:
---

I have uploaded v04, which is the same as v02 but with preserveEC disabled by 
default, and tried suppressing the Checkstyle warning.

> Add a Distcp option to preserve Erasure Coding attributes
> -
>
> Key: HADOOP-14254
> URL: https://issues.apache.org/jira/browse/HADOOP-14254
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-14254-01.patch, HADOOP-14254-02.patch, 
> HADOOP-14254-03.patch, HADOOP-14254-04.patch, HADOOP-14254.test.patch, 
> HDFS-11472.001.patch
>
>
> Currently Distcp does not preserve erasure coding attributes properly. I 
> propose we add a "-pe" switch to ensure erasure-coded files at the source are 
> copied as erasure-coded files at the destination.
> For example, suppose the src cluster has the following directories and files 
> that are copied to the dest cluster:
> hdfs://src/ - root directory, replicated
> hdfs://src/foo - erasure-coding-enabled directory
> hdfs://src/foo/bar - erasure-coded file
> After distcp, hdfs://dest/foo and hdfs://dest/foo/bar will not be erasure 
> coded.
> It may be useful to add such a capability. One potential use is disaster 
> recovery; another is out-of-place cluster upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15524) BytesWritable causes OOME when array size reaches Integer.MAX_VALUE

2020-05-11 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104513#comment-17104513
 ] 

Arpit Agarwal commented on HADOOP-15524:


Hey [~nanda], can you take a quick look and confirm you are still +1 on this 
change? I probably missed the notification when you tagged me last time.

> BytesWritable causes OOME when array size reaches Integer.MAX_VALUE
> ---
>
> Key: HADOOP-15524
> URL: https://issues.apache.org/jira/browse/HADOOP-15524
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Joseph Smith
>Assignee: Joseph Smith
>Priority: Major
>
> BytesWritable.setSize uses Integer.MAX_VALUE to initialize the internal 
> array. In my environment, this causes an OOME:
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> {code}
> byte[Integer.MAX_VALUE - 2] must be used to prevent this error.
> Tested on OSX and CentOS 7 using Java version 1.8.0_131.
> I noticed that java.util.ArrayList contains the following:
> {code:java}
> /**
>  * The maximum size of array to allocate.
>  * Some VMs reserve some header words in an array.
>  * Attempts to allocate larger arrays may result in
>  * OutOfMemoryError: Requested array size exceeds VM limit
>  */
> private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
> {code}
>  
> BytesWritable.setSize should use something similar to prevent an OOME from 
> occurring.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15524) BytesWritable causes OOME when array size reaches Integer.MAX_VALUE

2020-05-11 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HADOOP-15524:
--

Assignee: Joseph Smith

> BytesWritable causes OOME when array size reaches Integer.MAX_VALUE
> ---
>
> Key: HADOOP-15524
> URL: https://issues.apache.org/jira/browse/HADOOP-15524
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Joseph Smith
>Assignee: Joseph Smith
>Priority: Major
>
> BytesWritable.setSize uses Integer.MAX_VALUE to initialize the internal 
> array. In my environment, this causes an OOME:
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> {code}
> byte[Integer.MAX_VALUE - 2] must be used to prevent this error.
> Tested on OSX and CentOS 7 using Java version 1.8.0_131.
> I noticed that java.util.ArrayList contains the following:
> {code:java}
> /**
>  * The maximum size of array to allocate.
>  * Some VMs reserve some header words in an array.
>  * Attempts to allocate larger arrays may result in
>  * OutOfMemoryError: Requested array size exceeds VM limit
>  */
> private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
> {code}
>  
> BytesWritable.setSize should use something similar to prevent an OOME from 
> occurring.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15565:

Status: Open  (was: Patch Available)

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15565-branch-3.2.001.patch, 
> HADOOP-15565-branch-3.2.002.patch, HADOOP-15565.0001.patch, 
> HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, 
> HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, 
> HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all 
> the ViewFileSystem instances. We couldn't simply close all the child 
> filesystems because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem that caches all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem itself is still cached by FileSystem.CACHE, so 
> there won't be too many FileSystem instances.
> FileSystem.CACHE holds the ViewFileSystem instance, and the other 
> instances (the child filesystems) are held in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104509#comment-17104509
 ] 

Virajith Jalaparti commented on HADOOP-15565:
-

Good catch, [~umamaheswararao]. I removed {{removeFileSystemForTesting}} in 
[^HADOOP-15565-branch-3.2.002.patch].

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15565-branch-3.2.001.patch, 
> HADOOP-15565-branch-3.2.002.patch, HADOOP-15565.0001.patch, 
> HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, 
> HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, 
> HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all 
> the ViewFileSystem instances. We couldn't simply close all the child 
> filesystems because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem that caches all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem itself is still cached by FileSystem.CACHE, so 
> there won't be too many FileSystem instances.
> FileSystem.CACHE holds the ViewFileSystem instance, and the other 
> instances (the child filesystems) are held in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15565:

Status: Patch Available  (was: Open)

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15565-branch-3.2.001.patch, 
> HADOOP-15565-branch-3.2.002.patch, HADOOP-15565.0001.patch, 
> HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, 
> HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, 
> HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all 
> the ViewFileSystem instances. We couldn't simply close all the child 
> filesystems because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem that caches all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem itself is still cached by FileSystem.CACHE, so 
> there won't be too many FileSystem instances.
> FileSystem.CACHE holds the ViewFileSystem instance, and the other 
> instances (the child filesystems) are held in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13551) hook up AwsSdkMetrics to hadoop metrics

2020-05-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104508#comment-17104508
 ] 

Steve Loughran commented on HADOOP-13551:
-

HADOOP-16830 adds the stats but doesn't complete the wiring up, due to 
endpoint/region binding issues.

> hook up AwsSdkMetrics to hadoop metrics
> ---
>
> Key: HADOOP-13551
> URL: https://issues.apache.org/jira/browse/HADOOP-13551
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> There's an API in {{com.amazonaws.metrics.AwsSdkMetrics}} to give access to 
> the internal metrics of the AWS libraries. We might want to get at those.
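
For illustration, a minimal sketch of switching the SDK's metric collection on; the bridge class is hypothetical, and wiring the collected samples into Hadoop's metrics system is the part this issue would add:

{code:java}
import com.amazonaws.metrics.AwsSdkMetrics;

/** Hypothetical bridge class; only enableDefaultMetrics() is a real call. */
public class S3AMetricsBridge {
  public static void enable() {
    // Turn on the AWS SDK's built-in request metric collection.
    AwsSdkMetrics.enableDefaultMetrics();
    // A real integration would register a custom collector here and
    // forward its samples to the Hadoop metrics system.
  }
}
{code}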



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2020-05-11 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HADOOP-15565:

Attachment: HADOOP-15565-branch-3.2.002.patch

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15565-branch-3.2.001.patch, 
> HADOOP-15565-branch-3.2.002.patch, HADOOP-15565.0001.patch, 
> HADOOP-15565.0002.patch, HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, 
> HADOOP-15565.0005.patch, HADOOP-15565.0006.bak, HADOOP-15565.0006.patch, 
> HADOOP-15565.0007.patch, HADOOP-15565.0008.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all 
> the ViewFileSystem instances. We couldn't simply close all the child 
> filesystems because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem that caches all the child 
> filesystems. The child filesystems would then no longer be shared. When a 
> ViewFileSystem is closed, we close all the child filesystems in its inner 
> cache. The ViewFileSystem itself is still cached by FileSystem.CACHE, so 
> there won't be too many FileSystem instances.
> FileSystem.CACHE holds the ViewFileSystem instance, and the other 
> instances (the child filesystems) are held in the inner cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14254) Add a Distcp option to preserve Erasure Coding attributes

2020-05-11 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-14254:
--
Attachment: HADOOP-14254-04.patch

> Add a Distcp option to preserve Erasure Coding attributes
> -
>
> Key: HADOOP-14254
> URL: https://issues.apache.org/jira/browse/HADOOP-14254
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-14254-01.patch, HADOOP-14254-02.patch, 
> HADOOP-14254-03.patch, HADOOP-14254-04.patch, HADOOP-14254.test.patch, 
> HDFS-11472.001.patch
>
>
> Currently Distcp does not preserve erasure coding attributes properly. I 
> propose we add a "-pe" switch to ensure erasure-coded files at the source are 
> copied as erasure-coded files at the destination.
> For example, suppose the src cluster has the following directories and files 
> that are copied to the dest cluster:
> hdfs://src/ - root directory, replicated
> hdfs://src/foo - erasure-coding-enabled directory
> hdfs://src/foo/bar - erasure-coded file
> After distcp, hdfs://dest/foo and hdfs://dest/foo/bar will not be erasure 
> coded.
> It may be useful to add such a capability. One potential use is disaster 
> recovery; another is out-of-place cluster upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17036) TestFTPFileSystem failing as ftp server dir already exists

2020-05-11 Thread Mikhail Pryakhin (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pryakhin updated HADOOP-17036:
--
Status: Patch Available  (was: Open)

Patch available:

[https://github.com/apache/hadoop/pull/2009.patch]

> TestFTPFileSystem failing as ftp server dir already exists
> --
>
> Key: HADOOP-17036
> URL: https://issues.apache.org/jira/browse/HADOOP-17036
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Mikhail Pryakhin
>Priority: Minor
>
> TestFTPFileSystem is failing because the test dir already exists.
> We need to delete it in the setup/teardown of each test case.
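
For illustration, a minimal sketch of the setup/teardown cleanup, assuming JUnit 4 and a hypothetical serverDir field (the actual fix is in the PR linked above):

{code:java}
import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileUtils;
import org.junit.After;
import org.junit.Before;

public class TestFtpServerDirCleanup {
  // Hypothetical location; the real test derives its dir from the build tree.
  private final File serverDir = new File("target/test/data/ftp");

  @Before
  public void setUp() throws IOException {
    // Remove leftovers from earlier runs before the FTP server creates
    // the directory, avoiding FileAlreadyExistsException.
    FileUtils.deleteDirectory(serverDir);
  }

  @After
  public void tearDown() throws IOException {
    FileUtils.deleteDirectory(serverDir);
  }
}
{code}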



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16830) Add public IOStatistics API; S3A to support

2020-05-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16830:

Summary: Add public IOStatistics API; S3A to support  (was: Add public 
IOStatistics API; S3A to collect and report across threads)

> Add public IOStatistics API; S3A to support
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Applications like to collect statistics on how long specific operations take, 
> by capturing exactly those operations done during the execution of FS API 
> calls by their individual worker threads and returning these to their job 
> driver.
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed:
> # A new IOStatistics interface to serve up statistics
> # S3A to implement it
> # Other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to work out how best to offer an API for operation-context 
> stats, and how to actually implement it.
> ThreadLocal isn't enough, because the helper threads need to update the 
> thread-local value of the instigating thread.
> My initial PoC doesn't address that issue, but it shows what I'm thinking of.
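
For illustration, a minimal sketch of what the pass-through API could look like; the names are assumptions, and the final HADOOP-16830 interfaces may differ:

{code:java}
import java.util.Map;

/**
 * Snapshot of named counters that a stream or filesystem serves up and
 * a job driver can aggregate across worker threads (illustrative only).
 */
public interface IOStatistics {
  /** Map of counter name to current value. */
  Map<String, Long> counters();
}

/** Implemented by wrapper classes to pass statistics through. */
interface IOStatisticsSource {
  IOStatistics getIOStatistics();
}
{code}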



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17036) TestFTPFileSystem failing as ftp server dir already exists

2020-05-11 Thread Mikhail Pryakhin (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pryakhin reassigned HADOOP-17036:
-

Assignee: Mikhail Pryakhin

> TestFTPFileSystem failing as ftp server dir already exists
> --
>
> Key: HADOOP-17036
> URL: https://issues.apache.org/jira/browse/HADOOP-17036
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Mikhail Pryakhin
>Priority: Minor
>
> TestFTPFileSystem is failing because the test dir already exists.
> We need to delete it in the setup/teardown of each test case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17036) TestFTPFileSystem failing as ftp server dir already exists

2020-05-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104398#comment-17104398
 ] 

Steve Loughran commented on HADOOP-17036:
-

HDFS-1820 is the likely cause.

> TestFTPFileSystem failing as ftp server dir already exists
> --
>
> Key: HADOOP-17036
> URL: https://issues.apache.org/jira/browse/HADOOP-17036
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Minor
>
> TestFTPFileSystem is failing because the test dir already exists.
> We need to delete it in the setup/teardown of each test case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17036) TestFTPFileSystem failing as ftp server dir already exists

2020-05-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104395#comment-17104395
 ] 

Steve Loughran commented on HADOOP-17036:
-

{code}
[ERROR] 
testCreateWithWritePermissions(org.apache.hadoop.fs.ftp.TestFTPFileSystem)  
Time elapsed: 0.365 s  <<< ERROR!
java.nio.file.FileAlreadyExistsException: 
/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1982/src/hadoop-common-project/hadoop-common/target/test/data/2/test
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:88)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
at java.nio.file.Files.createDirectory(Files.java:674)
at org.apache.hadoop.fs.ftp.FtpTestServer.addUser(FtpTestServer.java:85)
at 
org.apache.hadoop.fs.ftp.TestFTPFileSystem.testCreateWithWritePermissions(TestFTPFileSystem.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}


> TestFTPFileSystem failing as ftp server dir already exists
> --
>
> Key: HADOOP-17036
> URL: https://issues.apache.org/jira/browse/HADOOP-17036
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Minor
>
> TestFTPFileSystem is failing because the test dir already exists.
> We need to delete it in the setup/teardown of each test case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17036) TestFTPFileSystem failing as ftp server dir already exists

2020-05-11 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17036:
---

 Summary: TestFTPFileSystem failing as ftp server dir already exists
 Key: HADOOP-17036
 URL: https://issues.apache.org/jira/browse/HADOOP-17036
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.4.0
Reporter: Steve Loughran


TestFTPFileSystem is failing because the test dir already exists.

We need to delete it in the setup/teardown of each test case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #1982: HADOOP-16830. IOStatistics API.

2020-05-11 Thread GitBox


hadoop-yetus removed a comment on pull request #1982:
URL: https://github.com/apache/hadoop/pull/1982#issuecomment-624880535


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  26m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
22 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 56s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m  7s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 47s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   3m 12s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 23s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 51s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 18s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 32s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 37s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 53s |  the patch passed  |
   | -1 :x: |  javac  |  19m 53s |  root generated 1 new + 1870 unchanged - 1 
fixed = 1871 total (was 1871)  |
   | -0 :warning: |  checkstyle  |   3m  7s |  root: The patch generated 66 new 
+ 100 unchanged - 19 fixed = 166 total (was 119)  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  1s |  The patch has 7 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 36s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 40s |  hadoop-tools_hadoop-aws generated 1 new + 
4 unchanged - 0 fixed = 5 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   3m 59s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m 34s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  2s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 167m 51s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
   |   | hadoop.metrics2.source.TestJvmMetrics |
   |   | hadoop.io.compress.TestCompressorDecompressor |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1982 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint xml |
   | uname | Linux 25486c8c3ba2 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 92e3ebb4019 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/4/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/4/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/4/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/4/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/4/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/4/testReport/ |
   | Max. process+thread count | 2533 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 

[GitHub] [hadoop] steveloughran commented on a change in pull request #1982: HADOOP-16830. IOStatistics API.

2020-05-11 Thread GitBox


steveloughran commented on a change in pull request #1982:
URL: https://github.com/apache/hadoop/pull/1982#discussion_r422988550



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java
##
@@ -634,46 +655,56 @@ public void close() {
 
   /**
* Statistics updated by an input stream during its actual operation.
-   * These counters not thread-safe and are for use in a single instance
-   * of a stream.
+   * These counters are marked as volatile so that IOStatistics on the stream

Review comment:
   fixed
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1982: HADOOP-16830. IOStatistics API.

2020-05-11 Thread GitBox


steveloughran commented on a change in pull request #1982:
URL: https://github.com/apache/hadoop/pull/1982#discussion_r422987566



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/DynamicIOStatisticsBuilder.java
##
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics.impl;
+
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.ToLongFunction;
+
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+import static com.google.common.base.Preconditions.checkState;
+
+/**
+ * Builder of Dynamic IO Statistics.
+ * Instantiate through
+ * {@link IOStatisticsBinding#dynamicIOStatistics()}.
+ */
+public class DynamicIOStatisticsBuilder {
+
+  /**
+   * the instance being built up. Will be null after the (single)
+   * call to {@link #build()}.
+   */
+  private DynamicIOStatistics instance = new DynamicIOStatistics();
+
+  /**
+   * Add a new evaluator to the statistics being built up.
+   * @param key key of this statistic
+   * @param eval evaluator for the statistic
+   * @return the builder.
+   */
+  public DynamicIOStatisticsBuilder add(String key,

Review comment:
   via a lambda expression
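
For illustration, a usage sketch of the builder shown above, with the evaluator supplied as a lambda; the statistic key is made up, and build() returning the assembled IOStatistics is an assumption:

{code:java}
import java.util.concurrent.atomic.AtomicLong;

import org.apache.hadoop.fs.statistics.IOStatistics;
import org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding;

public class DynamicStatsExample {
  public static IOStatistics statsFor(AtomicLong bytesRead) {
    // The lambda is invoked each time the statistic is read, so the
    // returned IOStatistics always reflects the current counter value.
    return IOStatisticsBinding.dynamicIOStatistics()
        .add("stream_bytes_read", key -> bytesRead.get())
        .build();
  }
}
{code}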





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #1982: HADOOP-16830. IOStatistics API.

2020-05-11 Thread GitBox


hadoop-yetus removed a comment on pull request #1982:
URL: https://github.com/apache/hadoop/pull/1982#issuecomment-620889592


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
22 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   0m 20s |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 23s |  root in trunk failed.  |
   | -0 :warning: |  checkstyle  |   0m 22s |  The patch fails to run 
checkstyle in root  |
   | -1 :x: |  mvnsite  |   0m 23s |  hadoop-common in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 22s |  hadoop-aws in trunk failed.  |
   | -1 :x: |  shadedclient  |   6m 25s |  branch has errors when building and 
testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 24s |  hadoop-aws in trunk failed.  |
   | +0 :ok: |  spotbugs  |   8m 15s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 24s |  hadoop-common in trunk failed.  |
   | -1 :x: |  findbugs  |   0m 11s |  hadoop-aws in trunk failed.  |
   | -0 :warning: |  patch  |   8m 40s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 42s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 19s |  hadoop-aws in the patch failed.  |
   | -1 :x: |  compile  |   0m 23s |  root in the patch failed.  |
   | -1 :x: |  javac  |   0m 23s |  root in the patch failed.  |
   | -0 :warning: |  checkstyle  |   3m 12s |  root: The patch generated 167 
new + 0 unchanged - 0 fixed = 167 total (was 0)  |
   | -1 :x: |  mvnsite  |   0m 23s |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvnsite  |   0m 43s |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  shadedclient  |   4m 50s |  patch has errors when building and 
testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 28s |  hadoop-tools_hadoop-aws generated 5 new + 
0 unchanged - 0 fixed = 5 total (was 0)  |
   | +1 :green_heart: |  findbugs  |   3m 51s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  19m 26s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |   0m 37s |  hadoop-aws in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 26s |  ASF License check generated no 
output?  |
   |  |   |  51m 11s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.metrics2.source.TestJvmMetrics |
   |   | hadoop.http.TestHttpServer |
   |   | hadoop.fs.TestStat |
   |   | hadoop.fs.contract.localfs.TestLocalFSContractMultipartUploader |
   |   | hadoop.ha.TestZKFailoverControllerStress |
   |   | hadoop.crypto.TestCryptoStreams |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1982 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 68a5126a219a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ab364295597 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/2/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/2/artifact/out/branch-compile-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/2/artifact/out/buildtool-branch-checkstyle-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/2/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/2/artifact/out/branch-mvnsite-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/2/artifact/out/branch-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #1982: HADOOP-16830. IOStatistics API.

2020-05-11 Thread GitBox


hadoop-yetus removed a comment on pull request #1982:
URL: https://github.com/apache/hadoop/pull/1982#issuecomment-621412721


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 40s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
22 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 57s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 17s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 19s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 42s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 26s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 16s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 15s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 22s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 37s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 47s |  the patch passed  |
   | -1 :x: |  javac  |  16m 47s |  root generated 1 new + 1870 unchanged - 1 
fixed = 1871 total (was 1871)  |
   | -0 :warning: |  checkstyle  |   2m 48s |  root: The patch generated 67 new 
+ 100 unchanged - 19 fixed = 167 total (was 119)  |
   | +1 :green_heart: |  mvnsite  |   2m 24s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  3s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-tools_hadoop-aws generated 1 new + 
4 unchanged - 0 fixed = 5 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   3m 40s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 22s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   1m 40s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 123m 38s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.ftp.TestFTPFileSystem |
   |   | hadoop.metrics2.source.TestJvmMetrics |
   |   | hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
   |   | hadoop.io.compress.TestCompressorDecompressor |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1982 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 32ae6d51ba7b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9ca6298a9ac |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/3/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/3/artifact/out/diff-checkstyle-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/3/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/3/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/3/testReport/ |
   | Max. process+thread count | 1605 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/3/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
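
For context on what PR #1982 is adding, a minimal usage sketch of the IOStatistics API: reading statistics back from a stream after some I/O. It is written against the names that eventually landed in org.apache.hadoop.fs.statistics (IOStatisticsSupport, IOStatistics); intermediate revisions of this PR may differ, so treat the identifiers as assumptions rather than the final API.

```java
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.statistics.IOStatistics;
import org.apache.hadoop.fs.statistics.IOStatisticsSupport;

public class IOStatisticsSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    Path path = new Path(args[0]);  // any path; S3A streams are a natural fit
    try (FileSystem fs = FileSystem.newInstance(path.toUri(), conf);
         FSDataInputStream in = fs.open(path)) {
      in.read();
      // Returns the stream's statistics if it implements
      // IOStatisticsSource, or null if it does not.
      IOStatistics stats = IOStatisticsSupport.retrieveIOStatistics(in);
      if (stats != null) {
        for (Map.Entry<String, Long> e : stats.counters().entrySet()) {
          System.out.println(e.getKey() + " = " + e.getValue());
        }
      }
    }
  }
}
```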
   

[GitHub] [hadoop] hadoop-yetus commented on pull request #1999: HADOOP-14566. Add seek support for SFTP FileSystem.

2020-05-11 Thread GitBox


hadoop-yetus commented on pull request #1999:
URL: https://github.com/apache/hadoop/pull/1999#issuecomment-626624253


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  22m  7s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
6 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  18m 50s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 17s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 42s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   2m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m  7s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 53s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 15s |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m 15s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 47s |  
hadoop-common-project/hadoop-common: The patch generated 1 new + 19 unchanged - 
0 fixed = 20 total (was 19)  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  18m 21s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   2m 57s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 48s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 135m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1999/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1999 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 535da9d2cfdb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 328eae9a146 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1999/4/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1999/4/testReport/ |
   | Max. process+thread count | 2161 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1999/4/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
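
Since the report above is all this comment carries, a sketch of what HADOOP-14566 enables once merged: random access on sftp:// streams through the usual Seekable contract. The host, credentials, and path below are placeholders, and the fs.sftp.* keys follow hadoop-common's SFTPFileSystem conventions, an assumption worth checking against your release.

```java
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SftpSeekSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Placeholder credentials for a hypothetical host.
    conf.set("fs.sftp.user.example.com", "hadoop");
    conf.set("fs.sftp.password.example.com.hadoop", "secret");
    try (FileSystem fs =
             FileSystem.get(URI.create("sftp://example.com"), conf);
         FSDataInputStream in = fs.open(new Path("/data/file.bin"))) {
      in.seek(1024L);  // skip the first 1 KiB; this is what the patch adds
      System.out.println("pos=" + in.getPos() + " byte=" + in.read());
    }
  }
}
```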
   
   






[GitHub] [hadoop] tasanuma opened a new pull request #2008: HDFS-15350. Set dfs.client.failover.random.order to true as default.

2020-05-11 Thread GitBox


tasanuma opened a new pull request #2008:
URL: https://github.com/apache/hadoop/pull/2008


   JIRA: https://issues.apache.org/jira/browse/HDFS-15350
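
For readers outside HDFS: dfs.client.failover.random.order makes the HA client shuffle the configured NameNodes instead of always probing them in listed order, so all clients do not converge on the first configured entry. A minimal sketch of setting it explicitly, which is what clients must do until this PR flips the default:

```java
import org.apache.hadoop.conf.Configuration;

public class FailoverRandomOrderSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Explicit opt-in today; HDFS-15350 proposes making true the default.
    conf.setBoolean("dfs.client.failover.random.order", true);
    System.out.println(conf.getBoolean(
        "dfs.client.failover.random.order", false));  // prints true
  }
}
```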






[jira] [Updated] (HADOOP-16768) SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data

2020-05-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16768:
---
Environment: 
X86/Aarch64
OS: Ubuntu 18.04, CentOS 8
Snappy 1.1.7

  was:
X86/Aarch64

OS: ubuntu 1804

JAVA 8


> SnappyCompressor test cases wrongly assume that the compressed data is always 
> smaller than the input data
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, test
> Environment: X86/Aarch64
> OS: Ubuntu 18.04, CentOS 8
> Snappy 1.1.7
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests will fail on X86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>   
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>  
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
> 
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
> 
>     at com.google.common.base.Joiner.join(Joiner.java:195)
> 
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>     at 
> 
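
The truncated trace above bottoms out in Guava's Joiner: presumably the failed compression path handed the test a null element to join, and Joiner's Preconditions.checkNotNull raised the NPE that masked the test's own failure message. A standalone illustration of that Joiner behavior:

```java
import java.util.Arrays;
import java.util.List;

import com.google.common.base.Joiner;

public class JoinerNpeSketch {
  public static void main(String[] args) {
    List<String> parts = Arrays.asList("ok", null, "also ok");
    // Joiner tolerates nulls only when told how to render or skip them.
    System.out.println(Joiner.on(", ").useForNull("<null>").join(parts));
    // Without useForNull()/skipNulls(), the next line throws
    // NullPointerException from Preconditions.checkNotNull, as in the
    // stack trace quoted above.
    System.out.println(Joiner.on(", ").join(parts));
  }
}
```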

[GitHub] [hadoop] aajisaka commented on pull request #1767: HADOOP-16768: DO NOT MERGE Test x86 and arm fail tests

2020-05-11 Thread GitBox


aajisaka commented on pull request #1767:
URL: https://github.com/apache/hadoop/pull/1767#issuecomment-626537595


   This issue has been fixed by #2003. Closing.






[jira] [Resolved] (HADOOP-17034) Fix failure of TestSnappyCompressorDecompressor on CentOS 8

2020-05-11 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki resolved HADOOP-17034.
---
  Assignee: (was: Masatake Iwasaki)
Resolution: Duplicate

Closing this as a duplicate of HADOOP-16768.

> Fix failure of TestSnappyCompressorDecompressor on CentOS 8
> ---
>
> Key: HADOOP-17034
> URL: https://issues.apache.org/jira/browse/HADOOP-17034
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: CentOS Linux release 8.0.1905 (Core), 
> snappy-devel-1.1.7-5.el8.x86_64
>Reporter: Masatake Iwasaki
>Priority: Major
>
> testSnappyCompressDecompress and testSnappyCompressDecompressInMultiThreads
> reproducibly fail on CentOS 8. These tests have no issues on CentOS 7.
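
The root cause this duplicates (HADOOP-16768) is easy to demonstrate outside the Hadoop harness: Snappy output for incompressible input is larger than the input, so any test buffer sized to the raw length is wrong. A minimal sketch, assuming the xerial snappy-java bindings rather than Hadoop's native codec:

```java
import java.io.IOException;
import java.util.Random;

import org.xerial.snappy.Snappy;

public class SnappyExpansionSketch {
  public static void main(String[] args) throws IOException {
    byte[] input = new byte[64 * 1024];
    new Random(42).nextBytes(input);  // effectively incompressible

    byte[] compressed = Snappy.compress(input);
    // Framing overhead makes the output exceed the input here, which is
    // exactly the case the old tests did not budget for.
    System.out.println("input=" + input.length
        + " compressed=" + compressed.length
        + " bound=" + Snappy.maxCompressedLength(input.length));
  }
}
```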






[jira] [Updated] (HADOOP-16768) SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data

2020-05-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16768:
---
Fix Version/s: 3.4.0
   3.3.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged the PR into trunk and branch-3.3.

> SnappyCompressor test cases wrongly assume that the compressed data is always 
> smaller than the input data
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, test
> Environment: X86/Aarch64
> OS: ubuntu 1804
> JAVA 8
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests will fail on X86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>   
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>  
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
> 
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
> 
>     at com.google.common.base.Joiner.join(Joiner.java:195)
> 
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> 

[jira] [Commented] (HADOOP-16768) SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data

2020-05-11 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104105#comment-17104105
 ] 

Hudson commented on HADOOP-16768:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18232 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18232/])
HADOOP-16768. SnappyCompressor test cases wrongly assume that the (github: rev 
328eae9a146b2dd9857a17a0db6fcddb1de23a0d)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/snappy/TestSnappyCompressorDecompressor.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/CompressDecompressTester.java
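
The commit above only touches the two test classes listed. A sketch of the buffer-handling pattern such tests need, a reading of the fix rather than a quote of it: drain the Compressor until finished() instead of assuming one input-sized output buffer suffices. Running it requires the native snappy library to be loadable.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Random;

import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.io.compress.snappy.SnappyCompressor;

public class DrainCompressorSketch {
  // Collects all output without assuming compressed <= input: keep
  // calling compress() into a scratch buffer until finished().
  static byte[] compressFully(Compressor c, byte[] input)
      throws IOException {
    c.setInput(input, 0, input.length);
    c.finish();
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] scratch = new byte[4096];
    while (!c.finished()) {
      int n = c.compress(scratch, 0, scratch.length);
      out.write(scratch, 0, n);
    }
    return out.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    byte[] data = new byte[8192];
    new Random(1).nextBytes(data);  // incompressible input
    byte[] compressed = compressFully(new SnappyCompressor(), data);
    System.out.println(data.length + " -> " + compressed.length);
  }
}
```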


> SnappyCompressor test cases wrongly assume that the compressed data is always 
> smaller than the input data
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, test
> Environment: X86/Aarch64
> OS: ubuntu 1804
> JAVA 8
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Major
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests will fail on X86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>   
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>  
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
> 
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
> 
>     at com.google.common.base.Joiner.join(Joiner.java:195)
> 
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
>