[jira] [Commented] (HDDS-108) Update Node2ContainerMap while processing container reports

2018-05-28 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16493110#comment-16493110
 ] 

Shashikant Banerjee commented on HDDS-108:
--

Thanks, [~msingh], for the review. Patch v2 addresses your review comments.

1. Should ContainerSupervisor.java be removed in this patch as well?

I think we can reuse some of its logic when rewriting container report 
processing. Let's keep it for now; we can remove it later if required.

> Update Node2ContainerMap while processing container reports
> ---
>
> Key: HDDS-108
> URL: https://issues.apache.org/jira/browse/HDDS-108
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-108.00.patch, HDDS-108.01.patch, HDDS-108.02.patch
>
>
> When a container report arrives, the Node2ContainerMap should be updated via 
> SCMContainerManager.
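As a rough illustration of the intended behavior (the class and method names below are hypothetical, not the actual SCM code), processing a container report boils down to replacing the recorded container set for the reporting datanode:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Hypothetical sketch, not the real Node2ContainerMap/SCMContainerManager:
// on a full container report, the recorded container set for the reporting
// datanode is replaced wholesale with what the node just reported.
class Node2ContainerMapSketch {
    private final Map<UUID, Set<Long>> node2Containers = new HashMap<>();

    /** Record the full set of containers a datanode reported. */
    void processReport(UUID datanode, Set<Long> reportedContainers) {
        node2Containers.put(datanode, new HashSet<>(reportedContainers));
    }

    /** Containers last reported by the given datanode (empty if none). */
    Set<Long> containersOf(UUID datanode) {
        return node2Containers.getOrDefault(datanode, Collections.<Long>emptySet());
    }
}
```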



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-108) Update Node2ContainerMap while processing container reports

2018-05-28 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-108:
-
Attachment: HDDS-108.02.patch

> Update Node2ContainerMap while processing container reports
> ---
>
> Key: HDDS-108
> URL: https://issues.apache.org/jira/browse/HDDS-108
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-108.00.patch, HDDS-108.01.patch, HDDS-108.02.patch
>
>
> When a container report arrives, the Node2ContainerMap should be updated via 
> SCMContainerManager.






[jira] [Commented] (HDFS-13632) Randomize baseDir for MiniJournalCluster in MiniQJMHACluster for TestDFSAdminWithHA

2018-05-28 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16493106#comment-16493106
 ] 

genericqa commented on HDFS-13632:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 15 unchanged - 0 fixed = 16 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13632 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925483/HDFS-13632.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 463a5baedc7d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 438ef49 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HDFS-13631) TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate MiniDFSCluster path

2018-05-28 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16493080#comment-16493080
 ] 

genericqa commented on HDFS-13631:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestReencryption |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13631 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925481/HDFS-13631.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7d1dc0e83b0c 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 438ef49 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24319/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24319/testReport/ |
| Max. process+thread count | 2791 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-13632) Randomize baseDir for MiniJournalCluster in MiniQJMHACluster for TestDFSAdminWithHA

2018-05-28 Thread Anbang Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16493004#comment-16493004
 ] 

Anbang Hu commented on HDFS-13632:
--

 [^HDFS-13632.000.patch] is for trunk.
 [^HDFS-13632-branch-2.000.patch] is for branch-2.
[~elgoiri], can you help review?

> Randomize baseDir for MiniJournalCluster in MiniQJMHACluster for 
> TestDFSAdminWithHA 
> 
>
> Key: HDFS-13632
> URL: https://issues.apache.org/jira/browse/HDFS-13632
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HDFS-13632-branch-2.000.patch, HDFS-13632.000.patch
>
>
> As [HDFS-13630|https://issues.apache.org/jira/browse/HDFS-13630] indicates, 
> testUpgradeCommand keeps the journalnode directory from being released, which 
> fails all subsequent tests that try to use the same path.
> Randomizing the baseDir for MiniJournalCluster in MiniQJMHACluster for 
> TestDFSAdminWithHA isolates the tests from each other.
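The randomization pattern described above can be sketched as follows (the class and method names are illustrative, not the real MiniQJMHACluster API): each cluster instance gets its own base directory, so a path left locked by one test cannot fail the tests that follow.

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;

// Illustrative sketch: a randomized, per-instance journal base directory.
// A directory that one test leaves locked can then never collide with the
// directory a later test tries to create and delete.
class RandomizedJournalDir {
    static Path journalBaseDir(String testRoot) {
        // Unique suffix per cluster instance isolates test runs from each other.
        return Paths.get(testRoot, "journal-" + UUID.randomUUID());
    }
}
```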






[jira] [Updated] (HDFS-13632) Randomize baseDir for MiniJournalCluster in MiniQJMHACluster for TestDFSAdminWithHA

2018-05-28 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13632:
-
Attachment: HDFS-13632.000.patch

> Randomize baseDir for MiniJournalCluster in MiniQJMHACluster for 
> TestDFSAdminWithHA 
> 
>
> Key: HDFS-13632
> URL: https://issues.apache.org/jira/browse/HDFS-13632
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HDFS-13632-branch-2.000.patch, HDFS-13632.000.patch
>
>
> As [HDFS-13630|https://issues.apache.org/jira/browse/HDFS-13630] indicates, 
> testUpgradeCommand keeps the journalnode directory from being released, which 
> fails all subsequent tests that try to use the same path.
> Randomizing the baseDir for MiniJournalCluster in MiniQJMHACluster for 
> TestDFSAdminWithHA isolates the tests from each other.






[jira] [Updated] (HDFS-13632) Randomize baseDir for MiniJournalCluster in MiniQJMHACluster for TestDFSAdminWithHA

2018-05-28 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13632:
-
Attachment: HDFS-13632-branch-2.000.patch
Status: Patch Available  (was: Open)

> Randomize baseDir for MiniJournalCluster in MiniQJMHACluster for 
> TestDFSAdminWithHA 
> 
>
> Key: HDFS-13632
> URL: https://issues.apache.org/jira/browse/HDFS-13632
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HDFS-13632-branch-2.000.patch, HDFS-13632.000.patch
>
>
> As [HDFS-13630|https://issues.apache.org/jira/browse/HDFS-13630] indicates, 
> testUpgradeCommand keeps the journalnode directory from being released, which 
> fails all subsequent tests that try to use the same path.
> Randomizing the baseDir for MiniJournalCluster in MiniQJMHACluster for 
> TestDFSAdminWithHA isolates the tests from each other.






[jira] [Updated] (HDFS-13632) Randomize baseDir for MiniJournalCluster in MiniQJMHACluster for TestDFSAdminWithHA

2018-05-28 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13632:
-
Description: 
As [HDFS-13630|https://issues.apache.org/jira/browse/HDFS-13630] indicates, 
testUpgradeCommand keeps the journalnode directory from being released, which fails 
all subsequent tests that try to use the same path.

Randomizing the baseDir for MiniJournalCluster in MiniQJMHACluster for 
TestDFSAdminWithHA isolates the tests from each other.

> Randomize baseDir for MiniJournalCluster in MiniQJMHACluster for 
> TestDFSAdminWithHA 
> 
>
> Key: HDFS-13632
> URL: https://issues.apache.org/jira/browse/HDFS-13632
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>
> As [HDFS-13630|https://issues.apache.org/jira/browse/HDFS-13630] indicates, 
> testUpgradeCommand keeps the journalnode directory from being released, which 
> fails all subsequent tests that try to use the same path.
> Randomizing the baseDir for MiniJournalCluster in MiniQJMHACluster for 
> TestDFSAdminWithHA isolates the tests from each other.






[jira] [Created] (HDFS-13632) Randomize baseDir for MiniJournalCluster in MiniQJMHACluster for TestDFSAdminWithHA

2018-05-28 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13632:


 Summary: Randomize baseDir for MiniJournalCluster in 
MiniQJMHACluster for TestDFSAdminWithHA 
 Key: HDFS-13632
 URL: https://issues.apache.org/jira/browse/HDFS-13632
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Anbang Hu
Assignee: Anbang Hu









[jira] [Updated] (HDFS-13626) When the setOwner operation was denied, the logging username is not appropriate

2018-05-28 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-13626:
-
Description: 
When user 'root' runs chown on the target file /tmp/test to change its owner 
to 'hive', the log displays 'User hive is not a super user'; the appropriate 
log here would be 'User root is not a super user'.
{code:java}
[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
-rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
[root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
chown: changing ownership of '/tmp/test': User hive is not a super user 
(non-super user cannot change owner).{code}
The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
log messages:

 
{code:java}
 
   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
-  throw new AccessControlException("Non-super user cannot change 
owner");
+  throw new AccessControlException("User " + username
+  + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
-  throw new AccessControlException("User does not belong to " + group);
+  throw new AccessControlException(
+  "User " + username + " does not belong to " + group);
 }
   } {code}
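The reporter's point can be made concrete with a small sketch (names are illustrative, not the actual NameNode code): the denial message should name the *calling* user, which is pc.getUser() in the real check, rather than the requested new owner.

```java
// Illustrative sketch of the message the reporter argues for: with caller
// 'root' asking to chown to 'hive', the denial must mention 'root', since it
// is the caller's lack of superuser privilege that causes the rejection.
class SetOwnerMessage {
    static String denialMessage(String callerUser) {
        return "User " + callerUser
            + " is not a super user (non-super user cannot change owner).";
    }
}
```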
 

  was:
When user 'root' runs chown on the target file /tmp/test to change its owner 
to 'hive', the log displays 'User hive is not a super user'; the appropriate 
log here would be 'User root is not a super user'.

[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
 -rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
 [root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
 chown: changing ownership of '/tmp/test': User hive is not a super user 
(non-super user cannot change owner).

The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
log messages:

if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
- throw new AccessControlException("Non-super user cannot change owner");
+ throw new AccessControlException("User " + username
+ + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
- throw new AccessControlException("User does not belong to " + group);
+ throw new AccessControlException(
+ "User " + username + " does not belong to " + group);
 }
 }

 

 

 


> When the setOwner operation was denied, the logging username is not appropriate
> --
>
> Key: HDFS-13626
> URL: https://issues.apache.org/jira/browse/HDFS-13626
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2
> Environment: hadoop 2.8.2
>Reporter: luhuachao
>Priority: Minor
>
> When user 'root' runs chown on the target file /tmp/test to change its owner 
> to 'hive', the log displays 'User hive is not a super user'; the appropriate 
> log here would be 'User root is not a super user'.
> {code:java}
> [root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
> -rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
> [root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
> chown: changing ownership of '/tmp/test': User hive is not a super user 
> (non-super user cannot change owner).{code}
> The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
> log messages:
>  
> {code:java}
>  
>if (!pc.isSuperUser()) {
>  if (username != null && !pc.getUser().equals(username)) {
> -  throw new AccessControlException("Non-super user cannot change 
> owner");
> +  throw new AccessControlException("User " + username
> +  + " is not a super user (non-super user cannot change 
> owner).");
>  }
>  if (group != null && !pc.isMemberOfGroup(group)) {
> -  throw new AccessControlException("User does not belong to " + 
> group);
> +  throw new AccessControlException(
> +  "User " + username + " does not belong to " + group);
>  }
>} {code}
>  






[jira] [Updated] (HDFS-13626) When the setOwner operation was denied, the logging username is not appropriate

2018-05-28 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-13626:
-
Description: 
When user 'root' runs chown on the target file /tmp/test to change its owner 
to 'hive', the log displays 'User hive is not a super user'; the appropriate 
log here would be 'User root is not a super user'.
{code:java}
[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
-rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
[root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
chown: changing ownership of '/tmp/test': User hive is not a super user 
(non-super user cannot change owner).{code}
The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
log messages:
{code:java}
 
   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
-  throw new AccessControlException("Non-super user cannot change 
owner");
+  throw new AccessControlException("User " + username
+  + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
-  throw new AccessControlException("User does not belong to " + group);
+  throw new AccessControlException(
+  "User " + username + " does not belong to " + group);
 }
   } {code}
 

  was:
When user 'root' runs chown on the target file /tmp/test to change its owner 
to 'hive', the log displays 'User hive is not a super user'; the appropriate 
log here would be 'User root is not a super user'.
{code:java}
[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
-rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
[root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
chown: changing ownership of '/tmp/test': User hive is not a super user 
(non-super user cannot change owner).{code}
The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
log messages:

 
{code:java}
 
   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
-  throw new AccessControlException("Non-super user cannot change 
owner");
+  throw new AccessControlException("User " + username
+  + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
-  throw new AccessControlException("User does not belong to " + group);
+  throw new AccessControlException(
+  "User " + username + " does not belong to " + group);
 }
   } {code}
 


> When the setOwner operation was denied, the logging username is not appropriate
> --
>
> Key: HDFS-13626
> URL: https://issues.apache.org/jira/browse/HDFS-13626
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2
> Environment: hadoop 2.8.2
>Reporter: luhuachao
>Priority: Minor
>
> When user 'root' runs chown on the target file /tmp/test to change its owner 
> to 'hive', the log displays 'User hive is not a super user'; the appropriate 
> log here would be 'User root is not a super user'.
> {code:java}
> [root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
> -rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
> [root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
> chown: changing ownership of '/tmp/test': User hive is not a super user 
> (non-super user cannot change owner).{code}
> The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
> log messages:
> {code:java}
>  
>if (!pc.isSuperUser()) {
>  if (username != null && !pc.getUser().equals(username)) {
> -  throw new AccessControlException("Non-super user cannot change 
> owner");
> +  throw new AccessControlException("User " + username
> +  + " is not a super user (non-super user cannot change 
> owner).");
>  }
>  if (group != null && !pc.isMemberOfGroup(group)) {
> -  throw new AccessControlException("User does not belong to " + 
> group);
> +  throw new AccessControlException(
> +  "User " + username + " does not belong to " + group);
>  }
>} {code}
>  






[jira] [Updated] (HDFS-13631) TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate MiniDFSCluster path

2018-05-28 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13631:
-
Attachment: HDFS-13631.000.patch
Status: Patch Available  (was: Open)

> TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate 
> MiniDFSCluster path
> --
>
> Key: HDFS-13631
> URL: https://issues.apache.org/jira/browse/HDFS-13631
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13631.000.patch
>
>
> [TestDFSAdmin#testCheckNumOfBlocksInReportCommand|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testCheckNumOfBlocksInReportCommand/]
>  fails with error message:
> {color:#d04437}Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\name-0-1{color}
> because testCheckNumOfBlocksInReportCommand is starting a new MiniDFSCluster 
> with the same base path as the one in @Before
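The fix pattern described above can be sketched as follows (class and method names are hypothetical, not the real MiniDFSCluster API): the second cluster is pointed at its own subdirectory instead of reusing the base path the @Before cluster still holds open.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch: each MiniDFSCluster instance in a test gets a distinct
// directory under the test root, avoiding "Could not fully delete" failures
// on platforms (notably Windows) where open file handles block deletion.
class SeparateClusterDir {
    static Path clusterDir(Path testRoot, String clusterName) throws IOException {
        return Files.createDirectories(testRoot.resolve(clusterName));
    }
}
```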






[jira] [Updated] (HDFS-13626) When the setOwner operation was denied, the logging username is not appropriate

2018-05-28 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-13626:
-
Description: 
When user 'root' runs chown on the target file /tmp/test to change its owner 
to 'hive', the log displays 'User hive is not a super user'; the appropriate 
log here would be 'User root is not a super user'.

[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
 -rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
 [root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
 chown: changing ownership of '/tmp/test': User hive is not a super user 
(non-super user cannot change owner).

The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
log messages:

if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
- throw new AccessControlException("Non-super user cannot change owner");
+ throw new AccessControlException("User " + username
+ + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
- throw new AccessControlException("User does not belong to " + group);
+ throw new AccessControlException(
+ "User " + username + " does not belong to " + group);
 }
 }

 

 

 

  was:
When user 'root' runs chown on the target file /tmp/test to change its owner 
to 'hive', the log displays 'User hive is not a super user'; the appropriate 
log here would be 'User root is not a super user'.

[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
 -rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
 [root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
 chown: changing ownership of '/tmp/test': User hive is not a super user 
(non-super user cannot change owner).

The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
log messages:

   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
-  throw new AccessControlException("Non-super user cannot change 
owner");
+  throw new AccessControlException("User " + username
+  + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
-  throw new AccessControlException("User does not belong to " + group);
+  throw new AccessControlException(
+  "User " + username + " does not belong to " + group);
 }
   }
 

 

 


> When the setOwner operation was denied, the logging username is not appropriate
> --
>
> Key: HDFS-13626
> URL: https://issues.apache.org/jira/browse/HDFS-13626
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2
> Environment: hadoop 2.8.2
>Reporter: luhuachao
>Priority: Minor
>
> When user 'root' runs chown on the target file /tmp/test to change its owner 
> to 'hive', the log displays 'User hive is not a super user'; the appropriate 
> log here would be 'User root is not a super user'.
> [root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
>  -rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
>  [root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
>  chown: changing ownership of '/tmp/test': User hive is not a super user 
> (non-super user cannot change owner).
> The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
> log messages:
> if (!pc.isSuperUser()) {
>  if (username != null && !pc.getUser().equals(username)) {
> - throw new AccessControlException("Non-super user cannot change owner");
> + throw new AccessControlException("User " + username
> + + " is not a super user (non-super user cannot change owner).");
>  }
>  if (group != null && !pc.isMemberOfGroup(group)) {
> - throw new AccessControlException("User does not belong to " + group);
> + throw new AccessControlException(
> + "User " + username + " does not belong to " + group);
>  }
>  }
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13631) TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate MiniDFSCluster path

2018-05-28 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13631:
-
Description: 
[TestDFSAdmin#testCheckNumOfBlocksInReportCommand|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testCheckNumOfBlocksInReportCommand/]
 fails with error message:
{color:#d04437}Could not fully delete 
F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\name-0-1{color}
because testCheckNumOfBlocksInReportCommand is starting a new MiniDFSCluster 
with the same base path as the one in @Before
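The fix suggested above (a separate MiniDFSCluster path per test) can be sketched with plain JDK calls; the technique is simply a collision-free base directory per cluster instance. The MiniDFSCluster.HDFS_MINIDFS_BASEDIR mapping shown in the trailing comment is how Hadoop tests typically wire such a path in, and is noted here as an assumption rather than part of this patch.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class UniqueBaseDir {
    // Create a fresh, collision-free directory per invocation so two
    // clusters started back to back never share on-disk state.
    static Path newBaseDir(String testName) throws IOException {
        return Files.createTempDirectory("minidfs-" + testName + "-");
    }

    public static void main(String[] args) throws IOException {
        Path a = newBaseDir("testCheckNumOfBlocksInReportCommand");
        Path b = newBaseDir("testCheckNumOfBlocksInReportCommand");
        // The two paths are guaranteed distinct, so the second cluster
        // cannot trip over leftover files from the first.
        System.out.println(!a.equals(b));
        // In an actual test the path would be fed to the cluster, e.g.:
        // conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, a.toString());
    }
}
```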

> TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate 
> MiniDFSCluster path
> --
>
> Key: HDFS-13631
> URL: https://issues.apache.org/jira/browse/HDFS-13631
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
>
> [TestDFSAdmin#testCheckNumOfBlocksInReportCommand|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testCheckNumOfBlocksInReportCommand/]
>  fails with error message:
> {color:#d04437}Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\name-0-1{color}
> because testCheckNumOfBlocksInReportCommand is starting a new MiniDFSCluster 
> with the same base path as the one in @Before






[jira] [Created] (HDFS-13631) TestDFSAdmin#testCheckNumOfBlocksInReportCommand should use a separate MiniDFSCluster path

2018-05-28 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13631:


 Summary: TestDFSAdmin#testCheckNumOfBlocksInReportCommand should 
use a separate MiniDFSCluster path
 Key: HDFS-13631
 URL: https://issues.apache.org/jira/browse/HDFS-13631
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Anbang Hu
Assignee: Anbang Hu









[jira] [Comment Edited] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-05-28 Thread Anbang Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492979#comment-16492979
 ] 

Anbang Hu edited comment on HDFS-13563 at 5/29/18 12:42 AM:


[~elgoiri], there are 4 timeout tests in this test class, but they are not the 
reason for the other 32 failures. It is restarting NN1 with the -upgrade option 
in testUpgradeCommand that does not release the journalnode directory. I opened 
[HDFS-13630|https://issues.apache.org/jira/browse/HDFS-13630] for discussion.

In addition, I would vote for having randomized journalnode path for 
MiniQJMHACluster. But we should leave 
[HDFS-13630|https://issues.apache.org/jira/browse/HDFS-13630] open until root 
cause is found.


was (Author: huanbang1993):
[~elgoiri], there are 4 timeout tests in this test class, but they are not the 
reason for the other 32 failures. It is the restarting NN1 with -upgrade option 
in testUpgradeCommand that did not release the journalnode directory. I opened 
[HDFS-13630|https://issues.apache.org/jira/browse/HDFS-13630] for discussion.

In addition, I would vote for having randomized MiniDFSCluster base path for 
this test class. But we should leave 
[HDFS-13630|https://issues.apache.org/jira/browse/HDFS-13630] open until root 
cause is found.

> TestDFSAdminWithHA times out on Windows
> ---
>
> Key: HDFS-13563
> URL: https://issues.apache.org/jira/browse/HDFS-13563
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13563.000.patch, HDFS-13563.001.patch
>
>
> {color:#33}[Daily Windows 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows 
> TestDFSAdminWithHA has 4 timeout tests with "{color}test timed out after 
> 3 milliseconds{color:#33}"{color}
> {code:java}
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshUserToGroupsMappingsNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshServiceAclNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshCallQueueNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshSuperUserGroupsConfigurationNN1DownNN2Down
> {code}
>  
>  
>  






[jira] [Commented] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-05-28 Thread Anbang Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492979#comment-16492979
 ] 

Anbang Hu commented on HDFS-13563:
--

[~elgoiri], there are 4 timeout tests in this test class, but they are not the 
reason for the other 32 failures. It is restarting NN1 with the -upgrade option 
in testUpgradeCommand that does not release the journalnode directory. I opened 
[HDFS-13630|https://issues.apache.org/jira/browse/HDFS-13630] for discussion.

In addition, I would vote for having randomized MiniDFSCluster base path for 
this test class. But we should leave 
[HDFS-13630|https://issues.apache.org/jira/browse/HDFS-13630] open until root 
cause is found.

> TestDFSAdminWithHA times out on Windows
> ---
>
> Key: HDFS-13563
> URL: https://issues.apache.org/jira/browse/HDFS-13563
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13563.000.patch, HDFS-13563.001.patch
>
>
> {color:#33}[Daily Windows 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows 
> TestDFSAdminWithHA has 4 timeout tests with "{color}test timed out after 
> 3 milliseconds{color:#33}"{color}
> {code:java}
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshUserToGroupsMappingsNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshServiceAclNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshCallQueueNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshSuperUserGroupsConfigurationNN1DownNN2Down
> {code}
>  
>  
>  






[jira] [Commented] (HDFS-13630) testUpgradeCommand fails following tests in TestDFSAdminWithHA on Windows

2018-05-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492976#comment-16492976
 ] 

Íñigo Goiri commented on HDFS-13630:


Yes, let's go with the randomized path.

> testUpgradeCommand fails following tests in TestDFSAdminWithHA on Windows
> -
>
> Key: HDFS-13630
> URL: https://issues.apache.org/jira/browse/HDFS-13630
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Priority: Minor
>
> 32 tests in 
> [TestDFSAdminWithHA|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdminWithHA/]
>  fail on Windows after testUpgradeCommand with error message:
> Could not format one or more JournalNodes. 1 exceptions thrown:
> {color:#d04437}127.0.0.1:58098: Directory 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\journalnode-0\ns1
>  is in an inconsistent state: Can't format the storage directory because the 
> current directory is not empty.
>  at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:600)
>  at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:683)
>  at 
> org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:210)
>  at org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:236)
>  at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:181)
>  at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:148)
>  at 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27399)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1687)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682){color}
> Restarting NN1 with -upgrade option seems to keep the journalnode directory 
> from being released after testUpgradeCommand.
> {code:java}
> // Start NN1 with -upgrade option
> dfsCluster.getNameNodeInfos()[0].setStartOpt(
> HdfsServerConstants.StartupOption.UPGRADE);
> dfsCluster.restartNameNode(0, true);
> {code}
> branch-2 does not have this issue, because there is no testUpgradeCommand in 
> branch-2.






[jira] [Updated] (HDFS-13630) testUpgradeCommand fails following tests in TestDFSAdminWithHA on Windows

2018-05-28 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13630:
-
Description: 
32 tests in 
[TestDFSAdminWithHA|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdminWithHA/]
 fail on Windows after testUpgradeCommand with error message:
Could not format one or more JournalNodes. 1 exceptions thrown:
{color:#d04437}127.0.0.1:58098: Directory 
F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\journalnode-0\ns1
 is in an inconsistent state: Can't format the storage directory because the 
current directory is not empty.
 at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:600)
 at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:683)
 at org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:210)
 at org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:236)
 at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:181)
 at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:148)
 at 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27399)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1687)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682){color}

Restarting NN1 with -upgrade option seems to keep the journalnode directory 
from being released after testUpgradeCommand.
{code:java}
// Start NN1 with -upgrade option
dfsCluster.getNameNodeInfos()[0].setStartOpt(
HdfsServerConstants.StartupOption.UPGRADE);
dfsCluster.restartNameNode(0, true);
{code}

branch-2 does not have this issue, because there is no testUpgradeCommand in 
branch-2.

  was:
32 tests in 
[TestDFSAdminWithHA|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdminWithHA/]
 fail on Windows after testUpgradeCommand with error message:
Could not format one or more JournalNodes. 1 exceptions thrown:
{color:#d04437}127.0.0.1:58098: Directory 
F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\journalnode-0\ns1
 is in an inconsistent state: Can't format the storage directory because the 
current directory is not empty.
 at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:600)
 at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:683)
 at org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:210)
 at org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:236)
 at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:181)
 at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:148)
 at 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27399)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1687)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682){color}

Restarting NN1 with -upgrade option seems to keep the journalnode directory 
from being released after testUpgradeCommand.
{code:java}
// Start NN1 with -upgrade option
dfsCluster.getNameNodeInfos()[0].setStartOpt(
HdfsServerConstants.StartupOption.UPGRADE);
dfsCluster.restartNameNode(0, true);
{code}



> testUpgradeCommand fails following tests in TestDFSAdminWithHA on Windows
> -
>
> Key: HDFS-13630
> URL: https://issues.apache.org/jira/browse/HDFS-13630
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>   

[jira] [Comment Edited] (HDFS-13630) testUpgradeCommand fails following tests in TestDFSAdminWithHA on Windows

2018-05-28 Thread Anbang Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492975#comment-16492975
 ] 

Anbang Hu edited comment on HDFS-13630 at 5/29/18 12:31 AM:


I have not figured out the root cause of this, but would suggest randomizing 
the base directory for MiniDFSCluster to isolate tests from each other.
[~elgoiri] any thoughts here?


was (Author: huanbang1993):
I have not figured out the root cause to this, but would suggest randomize the 
base directory for MiniDFSCluster to isolate tests from each other.

> testUpgradeCommand fails following tests in TestDFSAdminWithHA on Windows
> -
>
> Key: HDFS-13630
> URL: https://issues.apache.org/jira/browse/HDFS-13630
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Priority: Minor
>
> 32 tests in 
> [TestDFSAdminWithHA|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdminWithHA/]
>  fail on Windows after testUpgradeCommand with error message:
> Could not format one or more JournalNodes. 1 exceptions thrown:
> {color:#d04437}127.0.0.1:58098: Directory 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\journalnode-0\ns1
>  is in an inconsistent state: Can't format the storage directory because the 
> current directory is not empty.
>  at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:600)
>  at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:683)
>  at 
> org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:210)
>  at org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:236)
>  at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:181)
>  at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:148)
>  at 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27399)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1687)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682){color}
> Restarting NN1 with -upgrade option seems to keep the journalnode directory 
> from being released after testUpgradeCommand.
> {code:java}
> // Start NN1 with -upgrade option
> dfsCluster.getNameNodeInfos()[0].setStartOpt(
> HdfsServerConstants.StartupOption.UPGRADE);
> dfsCluster.restartNameNode(0, true);
> {code}






[jira] [Commented] (HDFS-13630) testUpgradeCommand fails following tests in TestDFSAdminWithHA on Windows

2018-05-28 Thread Anbang Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492975#comment-16492975
 ] 

Anbang Hu commented on HDFS-13630:
--

I have not figured out the root cause of this, but would suggest randomizing 
the base directory for MiniDFSCluster to isolate tests from each other.

> testUpgradeCommand fails following tests in TestDFSAdminWithHA on Windows
> -
>
> Key: HDFS-13630
> URL: https://issues.apache.org/jira/browse/HDFS-13630
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Priority: Minor
>
> 32 tests in 
> [TestDFSAdminWithHA|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdminWithHA/]
>  fail on Windows after testUpgradeCommand with error message:
> Could not format one or more JournalNodes. 1 exceptions thrown:
> {color:#d04437}127.0.0.1:58098: Directory 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\journalnode-0\ns1
>  is in an inconsistent state: Can't format the storage directory because the 
> current directory is not empty.
>  at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:600)
>  at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:683)
>  at 
> org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:210)
>  at org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:236)
>  at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:181)
>  at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:148)
>  at 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27399)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1687)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682){color}
> Restarting NN1 with -upgrade option seems to keep the journalnode directory 
> from being released after testUpgradeCommand.
> {code:java}
> // Start NN1 with -upgrade option
> dfsCluster.getNameNodeInfos()[0].setStartOpt(
> HdfsServerConstants.StartupOption.UPGRADE);
> dfsCluster.restartNameNode(0, true);
> {code}






[jira] [Updated] (HDFS-13630) testUpgradeCommand fails following tests in TestDFSAdminWithHA on Windows

2018-05-28 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13630:
-
Description: 
32 tests in 
[TestDFSAdminWithHA|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdminWithHA/]
 fail on Windows after testUpgradeCommand with error message:
Could not format one or more JournalNodes. 1 exceptions thrown:
{color:#d04437}127.0.0.1:58098: Directory 
F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\journalnode-0\ns1
 is in an inconsistent state: Can't format the storage directory because the 
current directory is not empty.
 at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:600)
 at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:683)
 at org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:210)
 at org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:236)
 at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:181)
 at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:148)
 at 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27399)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1687)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682){color}

Restarting NN1 with -upgrade option seems to keep the journalnode directory 
from being released after testUpgradeCommand.
{code:java}
// Start NN1 with -upgrade option
dfsCluster.getNameNodeInfos()[0].setStartOpt(
HdfsServerConstants.StartupOption.UPGRADE);
dfsCluster.restartNameNode(0, true);
{code}


> testUpgradeCommand fails following tests in TestDFSAdminWithHA on Windows
> -
>
> Key: HDFS-13630
> URL: https://issues.apache.org/jira/browse/HDFS-13630
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Priority: Minor
>
> 32 tests in 
> [TestDFSAdminWithHA|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdminWithHA/]
>  fail on Windows after testUpgradeCommand with error message:
> Could not format one or more JournalNodes. 1 exceptions thrown:
> {color:#d04437}127.0.0.1:58098: Directory 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\journalnode-0\ns1
>  is in an inconsistent state: Can't format the storage directory because the 
> current directory is not empty.
>  at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:600)
>  at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:683)
>  at 
> org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:210)
>  at org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:236)
>  at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:181)
>  at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:148)
>  at 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27399)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1687)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682){color}
> Restarting NN1 with -upgrade option seems to keep the journalnode directory 
> from being released after testUpgradeCommand.
> {code:java}
> // Start NN1 with -upgrade option
> dfsCluster.getNameNodeInfos()[0].setStartOpt(
> HdfsServerConstants.StartupOption.UPGRADE);
> dfsCluster.restartNameNode(0, true);
> 

[jira] [Created] (HDFS-13630) testUpgradeCommand fails following tests in TestDFSAdminWithHA on Windows

2018-05-28 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13630:


 Summary: testUpgradeCommand fails following tests in 
TestDFSAdminWithHA on Windows
 Key: HDFS-13630
 URL: https://issues.apache.org/jira/browse/HDFS-13630
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Anbang Hu









[jira] [Commented] (HDFS-13563) TestDFSAdminWithHA times out on Windows

2018-05-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492972#comment-16492972
 ] 

Íñigo Goiri commented on HDFS-13563:


Currently the [daily Windows 
build|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/junit/org.apache.hadoop.hdfs.tools/TestDFSAdminWithHA/]
 has 36 failures because of this.
However, this is caused by the 4 tests tweaked in [^HDFS-13563.001.patch].
I would use a randomized path for each test and then we can focus on why these 
4 unit tests are timing out.
[~huanbang1993] can you open a separate JIRA to randomize the MiniDFSCluster?

> TestDFSAdminWithHA times out on Windows
> ---
>
> Key: HDFS-13563
> URL: https://issues.apache.org/jira/browse/HDFS-13563
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13563.000.patch, HDFS-13563.001.patch
>
>
> {color:#33}[Daily Windows 
> build|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/] shows 
> TestDFSAdminWithHA has 4 timeout tests with "{color}test timed out after 
> 3 milliseconds{color:#33}"{color}
> {code:java}
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshUserToGroupsMappingsNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshServiceAclNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshCallQueueNN1DownNN2Down
> org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testRefreshSuperUserGroupsConfigurationNN1DownNN2Down
> {code}
>  
>  
>  






[jira] [Commented] (HDFS-13629) Some tests in TestDiskBalancerCommand fail on Windows due to MiniDFSCluster path conflict and improper path usage

2018-05-28 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492969#comment-16492969
 ] 

genericqa commented on HDFS-13629:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}157m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
|
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13629 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925467/HDFS-13629.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7fc1369c3e36 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 91d7c74 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24318/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24318/testReport/ |
| Max. process+thread count | 3054 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows

2018-05-28 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492968#comment-16492968
 ] 

Hudson commented on HDFS-13591:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14304 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14304/])
HDFS-13591. TestDFSShell#testSetrepLow fails on Windows. Contributed by 
(inigoiri: rev 9dbf4f01665d5480a70395a24519cbab5d4db0c5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java


> TestDFSShell#testSetrepLow fails on Windows
> ---
>
> Key: HDFS-13591
> URL: https://issues.apache.org/jira/browse/HDFS-13591
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13591.000.patch, HDFS-13591.001.patch, 
> HDFS-13591.002.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/
>  shows
> {code:java}
> Error message is not the expected error message 
> expected:<...testFileForSetrepLow[]
> > but was:<...testFileForSetrepLow[
> ]
> >
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows

2018-05-28 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13591:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.4
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Thanks [~huanbang1993] for the fix and [~lukmajercak] for the suggestion.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> TestDFSShell#testSetrepLow fails on Windows
> ---
>
> Key: HDFS-13591
> URL: https://issues.apache.org/jira/browse/HDFS-13591
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13591.000.patch, HDFS-13591.001.patch, 
> HDFS-13591.002.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/
>  shows
> {code:java}
> Error message is not the expected error message 
> expected:<...testFileForSetrepLow[]
> > but was:<...testFileForSetrepLow[
> ]
> >
> {code}
>  






[jira] [Commented] (HDFS-13629) Some tests in TestDiskBalancerCommand fail on Windows due to MiniDFSCluster path conflict and improper path usage

2018-05-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492963#comment-16492963
 ] 

Íñigo Goiri commented on HDFS-13629:


[~surmountian], do you mind taking a look at the fix?

> Some tests in TestDiskBalancerCommand fail on Windows due to MiniDFSCluster 
> path conflict and improper path usage
> -
>
> Key: HDFS-13629
> URL: https://issues.apache.org/jira/browse/HDFS-13629
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13629.000.patch
>
>
> The following fail due to MiniDFSCluster path conflict:
> * 
> [testDiskBalancerForceExecute|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerForceExecute/]
> * 
> [testDiskBalancerExecuteOptionPlanValidityWithException|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecuteOptionPlanValidityWithException/]
> * 
> [testDiskBalancerQueryWithoutSubmit|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerQueryWithoutSubmit/]
> * 
> [testDiskBalancerExecuteOptionPlanValidity|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecuteOptionPlanValidity/]
> * 
> [testRunMultipleCommandsUnderOneSetup|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testRunMultipleCommandsUnderOneSetup/]
> * 
> [testDiskBalancerExecutePlanValidityWithOutUnitException|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecutePlanValidityWithOutUnitException/]
> * 
> [testSubmitPlanInNonRegularStatus|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testSubmitPlanInNonRegularStatus/]
> * 
> [testPrintFullPathOfPlan|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testPrintFullPathOfPlan/]
> The following fails due to improper path usage:
> * 
> [testReportNodeWithoutJson|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testReportNodeWithoutJson/]






[jira] [Created] (HDFS-13629) Some tests in TestDiskBalancerCommand fail on Windows due to MiniDFSCluster path conflict and improper path usage

2018-05-28 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13629:


 Summary: Some tests in TestDiskBalancerCommand fail on Windows due 
to MiniDFSCluster path conflict and improper path usage
 Key: HDFS-13629
 URL: https://issues.apache.org/jira/browse/HDFS-13629
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Anbang Hu
Assignee: Anbang Hu









[jira] [Commented] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows

2018-05-28 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492924#comment-16492924
 ] 

genericqa commented on HDFS-13591:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13591 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925451/HDFS-13591.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3ca9e62f99c1 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 91d7c74 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24317/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24317/testReport/ |
| Max. process+thread count | 3158 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24317/console |
| Powered 

[jira] [Commented] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows

2018-05-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492958#comment-16492958
 ] 

Íñigo Goiri commented on HDFS-13591:


Handling the \n\r combinations in these cases is hard to get right.
I think this is a reasonable way to solve the issue.

The failed unit tests aren't related.
+1 on  [^HDFS-13591.002.patch].
Committing.
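For context, the general approach to this class of Windows test failure is to normalize platform line endings before comparing expected and actual output. The sketch below only illustrates that idea and is not the code from [^HDFS-13591.002.patch]:

```java
// Sketch: normalize platform line endings before comparing test output,
// so the same assertion passes on Windows (\r\n) and Unix (\n).
// Illustrative only; not the actual code from the patch.
public class LineEndingNormalizer {
    /** Convert Windows (\r\n) and old-Mac (\r) line endings to \n. */
    public static String normalize(String s) {
        return s.replace("\r\n", "\n").replace("\r", "\n");
    }

    public static void main(String[] args) {
        String windowsOutput = "testFileForSetrepLow\r\n";
        String unixOutput = "testFileForSetrepLow\n";
        // After normalization both platforms produce identical strings.
        System.out.println(normalize(windowsOutput).equals(normalize(unixOutput))); // prints true
    }
}
```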

> TestDFSShell#testSetrepLow fails on Windows
> ---
>
> Key: HDFS-13591
> URL: https://issues.apache.org/jira/browse/HDFS-13591
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13591.000.patch, HDFS-13591.001.patch, 
> HDFS-13591.002.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/
>  shows
> {code:java}
> Error message is not the expected error message 
> expected:<...testFileForSetrepLow[]
> > but was:<...testFileForSetrepLow[
> ]
> >
> {code}
>  






[jira] [Updated] (HDFS-13629) Some tests in TestDiskBalancerCommand fail on Windows due to MiniDFSCluster path conflict and improper path usage

2018-05-28 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13629:
-
Description: 
The following fail due to MiniDFSCluster path conflict:
* 
[testDiskBalancerForceExecute|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerForceExecute/]
* 
[testDiskBalancerExecuteOptionPlanValidityWithException|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecuteOptionPlanValidityWithException/]
* 
[testDiskBalancerQueryWithoutSubmit|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerQueryWithoutSubmit/]
* 
[testDiskBalancerExecuteOptionPlanValidity|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecuteOptionPlanValidity/]
* 
[testRunMultipleCommandsUnderOneSetup|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testRunMultipleCommandsUnderOneSetup/]
* 
[testDiskBalancerExecutePlanValidityWithOutUnitException|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecutePlanValidityWithOutUnitException/]
* 
[testSubmitPlanInNonRegularStatus|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testSubmitPlanInNonRegularStatus/]
* 
[testPrintFullPathOfPlan|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testPrintFullPathOfPlan/]

The following fails due to improper path usage:
* 
[testReportNodeWithoutJson|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testReportNodeWithoutJson/]
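MiniDFSCluster path conflicts of this kind are usually resolved by giving each test its own randomized base directory instead of a shared default data path. A stdlib-only sketch of that idea follows; the {{MiniDFSCluster.HDFS_MINIDFS_BASEDIR}} usage in the comment reflects common Hadoop practice, and the helper itself is hypothetical, not the patch's actual code:

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;

// Sketch: compute a unique base directory per test so two MiniDFSCluster
// instances (or a leftover one from a failed run) never collide on the
// same data directories. Hypothetical helper, not the patch's code.
public class TestDirs {
    public static Path uniqueBaseDir(String testName) {
        return Paths.get(System.getProperty("java.io.tmpdir"),
                testName + "-" + UUID.randomUUID());
    }

    public static void main(String[] args) {
        // In an HDFS test this value would typically be applied as
        // conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, dir.toString());
        Path dir = uniqueBaseDir("testDiskBalancerForceExecute");
        System.out.println(dir);
    }
}
```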

> Some tests in TestDiskBalancerCommand fail on Windows due to MiniDFSCluster 
> path conflict and improper path usage
> -
>
> Key: HDFS-13629
> URL: https://issues.apache.org/jira/browse/HDFS-13629
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
>
> The following fail due to MiniDFSCluster path conflict:
> * 
> [testDiskBalancerForceExecute|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerForceExecute/]
> * 
> [testDiskBalancerExecuteOptionPlanValidityWithException|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecuteOptionPlanValidityWithException/]
> * 
> [testDiskBalancerQueryWithoutSubmit|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerQueryWithoutSubmit/]
> * 
> [testDiskBalancerExecuteOptionPlanValidity|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecuteOptionPlanValidity/]
> * 
> [testRunMultipleCommandsUnderOneSetup|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testRunMultipleCommandsUnderOneSetup/]
> * 
> [testDiskBalancerExecutePlanValidityWithOutUnitException|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecutePlanValidityWithOutUnitException/]
> * 
> [testSubmitPlanInNonRegularStatus|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testSubmitPlanInNonRegularStatus/]
> * 
> [testPrintFullPathOfPlan|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testPrintFullPathOfPlan/]
> The following fails due to improper path usage:
> * 
> [testReportNodeWithoutJson|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testReportNodeWithoutJson/]





[jira] [Updated] (HDFS-13629) Some tests in TestDiskBalancerCommand fail on Windows due to MiniDFSCluster path conflict and improper path usage

2018-05-28 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13629:
-
Attachment: HDFS-13629.000.patch
Status: Patch Available  (was: Open)

> Some tests in TestDiskBalancerCommand fail on Windows due to MiniDFSCluster 
> path conflict and improper path usage
> -
>
> Key: HDFS-13629
> URL: https://issues.apache.org/jira/browse/HDFS-13629
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13629.000.patch
>
>
> The following fail due to MiniDFSCluster path conflict:
> * 
> [testDiskBalancerForceExecute|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerForceExecute/]
> * 
> [testDiskBalancerExecuteOptionPlanValidityWithException|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecuteOptionPlanValidityWithException/]
> * 
> [testDiskBalancerQueryWithoutSubmit|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerQueryWithoutSubmit/]
> * 
> [testDiskBalancerExecuteOptionPlanValidity|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecuteOptionPlanValidity/]
> * 
> [testRunMultipleCommandsUnderOneSetup|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testRunMultipleCommandsUnderOneSetup/]
> * 
> [testDiskBalancerExecutePlanValidityWithOutUnitException|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testDiskBalancerExecutePlanValidityWithOutUnitException/]
> * 
> [testSubmitPlanInNonRegularStatus|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testSubmitPlanInNonRegularStatus/]
> * 
> [testPrintFullPathOfPlan|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testPrintFullPathOfPlan/]
> The following fails due to improper path usage:
> * 
> [testReportNodeWithoutJson|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs.server.diskbalancer.command/TestDiskBalancerCommand/testReportNodeWithoutJson/]






[jira] [Created] (HDDS-130) TestGenerateOzoneRequiredConfigurations should use GenericTestUtils#getTempPath as output directory

2018-05-28 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-130:


 Summary: TestGenerateOzoneRequiredConfigurations should use 
GenericTestUtils#getTempPath as output directory
 Key: HDDS-130
 URL: https://issues.apache.org/jira/browse/HDDS-130
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nanda kumar


{{TestGenerateOzoneRequiredConfigurations}} uses the current directory (.) as its 
output location, which generates the {{ozone-site.xml}} file in whatever directory 
the test cases are executed from. Instead we should use 
{{GenericTestUtils#getTempPath}} as the output directory for test cases.
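In Hadoop's test utilities, {{GenericTestUtils#getTempPath}} resolves a name under the test build data directory rather than the process working directory. The stdlib-only sketch below mirrors that behavior to show why the generated {{ozone-site.xml}} should not land in "."; the helper is illustrative, not the actual Hadoop implementation:

```java
import java.io.File;

// Sketch of the getTempPath idea: resolve an output file under a dedicated
// test/temp directory instead of the current working directory.
// Hypothetical helper; not the real GenericTestUtils code.
public class TempOutput {
    public static File tempPath(String name) {
        // Hadoop tests use the "test.build.data" property; fall back to the
        // JVM temp dir so this sketch runs anywhere.
        String base = System.getProperty("test.build.data",
                System.getProperty("java.io.tmpdir"));
        return new File(base, name);
    }

    public static void main(String[] args) {
        File out = tempPath("ozone-site.xml");
        // The generated config no longer pollutes the working directory.
        System.out.println(out.getAbsolutePath());
    }
}
```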






[jira] [Commented] (HDDS-81) Moving ContainerReport inside Datanode heartbeat

2018-05-28 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-81?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492937#comment-16492937
 ] 

genericqa commented on HDDS-81:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 36m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 29m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 19s{color} | {color:orange} root: The patch generated 5 new + 34 unchanged - 
3 fixed = 39 total (was 37) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} hadoop-hdds/server-scm generated 2 new + 0 unchanged - 
1 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  9s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 44s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
45s{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/server-scm |
|  |  Nullcheck of datanodeDetails at line 818 

[jira] [Updated] (HDDS-128) Support for DeltaContainerReport

2018-05-28 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-128:
-
Attachment: HDDS-128.000.patch

> Support for DeltaContainerReport
> 
>
> Key: HDDS-128
> URL: https://issues.apache.org/jira/browse/HDDS-128
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-128.000.patch
>
>
> Whenever a container reaches its configured upper limit, datanode informs SCM 
> about those containers through {{DeltaContainerReport}}. This jira adds 
> support for {{DeltaContainerReport}}.






[jira] [Updated] (HDDS-130) TestGenerateOzoneRequiredConfigurations should use GenericTestUtils#getTempPath to set output directory

2018-05-28 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-130:
-
Component/s: Tools

> TestGenerateOzoneRequiredConfigurations should use 
> GenericTestUtils#getTempPath to set output directory
> ---
>
> Key: HDDS-130
> URL: https://issues.apache.org/jira/browse/HDDS-130
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Reporter: Nanda kumar
>Priority: Minor
>  Labels: newbie
>
> {{TestGenerateOzoneRequiredConfigurations}} uses the current directory (.) as its 
> output location, which generates the {{ozone-site.xml}} file in whatever directory 
> the test cases are executed from. Instead we should use 
> {{GenericTestUtils#getTempPath}} to get the output directory for test cases.






[jira] [Updated] (HDDS-130) TestGenerateOzoneRequiredConfigurations should use GenericTestUtils#getTempPath to set output directory

2018-05-28 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-130:
-
Summary: TestGenerateOzoneRequiredConfigurations should use 
GenericTestUtils#getTempPath to set output directory  (was: 
TestGenerateOzoneRequiredConfigurations should use GenericTestUtils#getTempPath 
as output directory)

> TestGenerateOzoneRequiredConfigurations should use 
> GenericTestUtils#getTempPath to set output directory
> ---
>
> Key: HDDS-130
> URL: https://issues.apache.org/jira/browse/HDDS-130
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Priority: Minor
>  Labels: newbie
>
> {{TestGenerateOzoneRequiredConfigurations}} uses the current directory (.) as 
> its output location, which generates the {{ozone-site.xml}} file in the 
> directory from where the test cases are executed. Instead we should use 
> {{GenericTestUtils#getTempPath}} as the output directory for test cases.






[jira] [Updated] (HDDS-130) TestGenerateOzoneRequiredConfigurations should use GenericTestUtils#getTempPath to set output directory

2018-05-28 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-130:
-
Description: {{TestGenerateOzoneRequiredConfigurations}} uses the current 
directory (.) as its output location, which generates the {{ozone-site.xml}} 
file in the directory from where the test cases are executed. Instead we should 
use {{GenericTestUtils#getTempPath}} to get the output directory for test 
cases.  (was: {{TestGenerateOzoneRequiredConfigurations}} uses current 
directory (.) as its output location which generates {{ozone-site.xml}} file 
in the directory from where the test-cases is executed. Insead we should use 
{{GenericTestUtils#getTempPath}} as output directory for test-cases.)

> TestGenerateOzoneRequiredConfigurations should use 
> GenericTestUtils#getTempPath to set output directory
> ---
>
> Key: HDDS-130
> URL: https://issues.apache.org/jira/browse/HDDS-130
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Priority: Minor
>  Labels: newbie
>
> {{TestGenerateOzoneRequiredConfigurations}} uses the current directory (.) as 
> its output location, which generates the {{ozone-site.xml}} file in the 
> directory from where the test cases are executed. Instead we should use 
> {{GenericTestUtils#getTempPath}} to get the output directory for test cases.






[jira] [Updated] (HDDS-81) Moving ContainerReport inside Datanode heartbeat

2018-05-28 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-81?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-81:

Attachment: HDDS-81.001.patch

> Moving ContainerReport inside Datanode heartbeat
> 
>
> Key: HDDS-81
> URL: https://issues.apache.org/jira/browse/HDDS-81
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-81.000.patch, HDDS-81.001.patch
>
>
> {{sendContainerReport}} is currently a separate RPC call; as part of the 
> heartbeat refactoring, ContainerReport will be moved into the heartbeat.






[jira] [Commented] (HDFS-13627) TestErasureCodingExerciseAPIs fails on Windows

2018-05-28 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492860#comment-16492860
 ] 

Hudson commented on HDFS-13627:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14302 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14302/])
HDFS-13627. TestErasureCodingExerciseAPIs fails on Windows. Contributed 
(inigoiri: rev 91d7c74e6aa4850922f68bab490b585443e4fccb)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingExerciseAPIs.java


> TestErasureCodingExerciseAPIs fails on Windows
> --
>
> Key: HDFS-13627
> URL: https://issues.apache.org/jira/browse/HDFS-13627
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 3.0.2
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HDFS-13627.000.patch
>
>
> All tests in 
> [TestErasureCodingExerciseAPIs|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs/TestErasureCodingExerciseAPIs/]
>  fail with the error message:
> {color:#d04437}No FileSystem for scheme "filetarget"{color}
> This is caused by improper Path usage on Windows.






[jira] [Updated] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows

2018-05-28 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13591:
-
Attachment: HDFS-13591.002.patch

> TestDFSShell#testSetrepLow fails on Windows
> ---
>
> Key: HDFS-13591
> URL: https://issues.apache.org/jira/browse/HDFS-13591
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13591.000.patch, HDFS-13591.001.patch, 
> HDFS-13591.002.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/
>  shows
> {code:java}
> Error message is not the expected error message 
> expected:<...testFileForSetrepLow[]
> > but was:<...testFileForSetrepLow[
> ]
> >
> {code}
>  






[jira] [Commented] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows

2018-05-28 Thread Anbang Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492877#comment-16492877
 ] 

Anbang Hu commented on HDFS-13591:
--

Thanks [~elgoiri] for the review and suggestion. New patch  
[^HDFS-13591.002.patch] is uploaded.

> TestDFSShell#testSetrepLow fails on Windows
> ---
>
> Key: HDFS-13591
> URL: https://issues.apache.org/jira/browse/HDFS-13591
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13591.000.patch, HDFS-13591.001.patch, 
> HDFS-13591.002.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/
>  shows
> {code:java}
> Error message is not the expected error message 
> expected:<...testFileForSetrepLow[]
> > but was:<...testFileForSetrepLow[
> ]
> >
> {code}
>  






[jira] [Created] (HDDS-129) Move NodeReport, ContainerReport and DeltaContainerReport into DatanodeReport

2018-05-28 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-129:


 Summary: Move NodeReport, ContainerReport and DeltaContainerReport 
into DatanodeReport
 Key: HDDS-129
 URL: https://issues.apache.org/jira/browse/HDDS-129
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode
Reporter: Nanda kumar
Assignee: Nanda kumar


Since {{NodeReport}}, {{ContainerReport}} and {{DeltaContainerReport}} are all 
reports from the Datanode, it will be easier for us to have them under 
{{DatanodeReport}}.






[jira] [Commented] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows

2018-05-28 Thread Anbang Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492861#comment-16492861
 ] 

Anbang Hu commented on HDFS-13591:
--

{{bao.toString()}} will contain "\r\r\n" on Windows, so I'm not sure it's a 
good way to assert that the expected message includes {{bao.toString()}}.
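A small self-contained sketch of the line-ending issue described above (the helper name and sample strings are illustrative, not from the patch): a {{ByteArrayOutputStream}} captured from a {{PrintStream}} on Windows can contain "\r\r\n" sequences, so normalizing line endings before comparing avoids platform-dependent assertion failures.

```java
public class LineEndingNormalization {
    // Collapse any run of carriage returns before a newline into a single
    // '\n', so Windows output ("\r\r\n" or "\r\n") compares equal to Unix.
    static String normalize(String s) {
        return s.replaceAll("\\r+\\n", "\n");
    }

    public static void main(String[] args) {
        String windowsOutput = "setrep: error\r\r\n"; // captured on Windows
        String unixOutput = "setrep: error\n";        // captured on Linux
        System.out.println(
            normalize(windowsOutput).equals(normalize(unixOutput))); // prints true
    }
}
```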

> TestDFSShell#testSetrepLow fails on Windows
> ---
>
> Key: HDFS-13591
> URL: https://issues.apache.org/jira/browse/HDFS-13591
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13591.000.patch, HDFS-13591.001.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/
>  shows
> {code:java}
> Error message is not the expected error message 
> expected:<...testFileForSetrepLow[]
> > but was:<...testFileForSetrepLow[
> ]
> >
> {code}
>  






[jira] [Commented] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows

2018-05-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492872#comment-16492872
 ] 

Íñigo Goiri commented on HDFS-13591:


I mean something like this:
{code}
assertTrue("Error message does not contain the expected error message: "
        + bao.toString(),
    bao.toString().startsWith(
        "setrep: Requested replication factor of 1 is less than "
        + "the required minimum of 2 for /tmp/TestDFSShell-"
        + "testSetrepLow/testFileForSetrepLow"));
{code}

> TestDFSShell#testSetrepLow fails on Windows
> ---
>
> Key: HDFS-13591
> URL: https://issues.apache.org/jira/browse/HDFS-13591
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13591.000.patch, HDFS-13591.001.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/
>  shows
> {code:java}
> Error message is not the expected error message 
> expected:<...testFileForSetrepLow[]
> > but was:<...testFileForSetrepLow[
> ]
> >
> {code}
>  






[jira] [Commented] (HDFS-13591) TestDFSShell#testSetrepLow fails on Windows

2018-05-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492846#comment-16492846
 ] 

Íñigo Goiri commented on HDFS-13591:


I would include {{bao.toString()}} on the message.

> TestDFSShell#testSetrepLow fails on Windows
> ---
>
> Key: HDFS-13591
> URL: https://issues.apache.org/jira/browse/HDFS-13591
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13591.000.patch, HDFS-13591.001.patch
>
>
> https://builds.apache.org/job/hadoop-trunk-win/469/testReport/org.apache.hadoop.hdfs/TestDFSShell/testSetrepLow/
>  shows
> {code:java}
> Error message is not the expected error message 
> expected:<...testFileForSetrepLow[]
> > but was:<...testFileForSetrepLow[
> ]
> >
> {code}
>  






[jira] [Updated] (HDFS-13627) TestErasureCodingExerciseAPIs fails on Windows

2018-05-28 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13627:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

Thanks [~huanbang1993] for the fix.
Committed to trunk and branch-3.1.

> TestErasureCodingExerciseAPIs fails on Windows
> --
>
> Key: HDFS-13627
> URL: https://issues.apache.org/jira/browse/HDFS-13627
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 3.0.2
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HDFS-13627.000.patch
>
>
> All tests in 
> [TestErasureCodingExerciseAPIs|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs/TestErasureCodingExerciseAPIs/]
>  fail with the error message:
> {color:#d04437}No FileSystem for scheme "filetarget"{color}
> This is caused by improper Path usage on Windows.






[jira] [Updated] (HDFS-13627) TestErasureCodingExerciseAPIs fails on Windows

2018-05-28 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13627:
---
Affects Version/s: (was: 3.1.0)

> TestErasureCodingExerciseAPIs fails on Windows
> --
>
> Key: HDFS-13627
> URL: https://issues.apache.org/jira/browse/HDFS-13627
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 3.0.2
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13627.000.patch
>
>
> All tests in 
> [TestErasureCodingExerciseAPIs|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs/TestErasureCodingExerciseAPIs/]
>  fail with the error message:
> {color:#d04437}No FileSystem for scheme "filetarget"{color}
> This is caused by improper Path usage on Windows.






[jira] [Updated] (HDFS-13627) TestErasureCodingExerciseAPIs fails on Windows

2018-05-28 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13627:
---
Affects Version/s: 3.1.0
   3.0.2

> TestErasureCodingExerciseAPIs fails on Windows
> --
>
> Key: HDFS-13627
> URL: https://issues.apache.org/jira/browse/HDFS-13627
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 3.1.0, 3.0.2
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13627.000.patch
>
>
> All tests in 
> [TestErasureCodingExerciseAPIs|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs/TestErasureCodingExerciseAPIs/]
>  fail with the error message:
> {color:#d04437}No FileSystem for scheme "filetarget"{color}
> This is caused by improper Path usage on Windows.






[jira] [Commented] (HDFS-13627) TestErasureCodingExerciseAPIs fails on Windows

2018-05-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492842#comment-16492842
 ] 

Íñigo Goiri commented on HDFS-13627:


One can see this in the [daily Windows 
build|https://builds.apache.org/job/hadoop-trunk-win/476/testReport/org.apache.hadoop.hdfs/TestErasureCodingExerciseAPIs/].
+1 on  [^HDFS-13627.000.patch].
Committing.

> TestErasureCodingExerciseAPIs fails on Windows
> --
>
> Key: HDFS-13627
> URL: https://issues.apache.org/jira/browse/HDFS-13627
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13627.000.patch
>
>
> All tests in 
> [TestErasureCodingExerciseAPIs|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs/TestErasureCodingExerciseAPIs/]
>  fail with the error message:
> {color:#d04437}No FileSystem for scheme "filetarget"{color}
> This is caused by improper Path usage on Windows.






[jira] [Created] (HDDS-128) Support for DeltaContainerReport

2018-05-28 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-128:


 Summary: Support for DeltaContainerReport
 Key: HDDS-128
 URL: https://issues.apache.org/jira/browse/HDDS-128
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode, SCM
Reporter: Nanda kumar
Assignee: Nanda kumar


Whenever a container reaches its configured upper limit, the datanode informs 
SCM about those containers through {{DeltaContainerReport}}. This jira adds 
support for {{DeltaContainerReport}}.






[jira] [Commented] (HDFS-13339) Volume reference can't be released and leads to deadlock when DataXceiver does a check volume

2018-05-28 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492808#comment-16492808
 ] 

Zsolt Venczel commented on HDFS-13339:
--

Thank you very much [~xiaochen] for taking a look!

The TestDatasetVolumeChecker tests were failing because the checkVolume call 
now behaves truly asynchronously. I've updated the test to support this 
behavior.

The above two tests are passing for me locally:
{code}
[INFO] Running 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 112.129 
s - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
[INFO] Running 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 126.407 
s - in org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0
{code}

> Volume reference can't be released and leads to deadlock when DataXceiver 
> does a check volume
> -
>
> Key: HDFS-13339
> URL: https://issues.apache.org/jira/browse/HDFS-13339
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
> Environment: os: Linux 2.6.32-358.el6.x86_64
> hadoop version: hadoop-3.2.0-SNAPSHOT
> unit: mvn test -Pnative 
> -Dtest=TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Critical
>  Labels: DataNode, volumes
> Attachments: HDFS-13339.001.patch, HDFS-13339.002.patch, 
> HDFS-13339.003.patch
>
>
> When I execute the unit test
>  TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart, 
> the process blocks on waitReplication; detailed information follows:
> [INFO] ---
>  [INFO] T E S T S
>  [INFO] ---
>  [INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
>  [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 307.492 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
>  [ERROR] 
> testVolFailureStatsPreservedOnNNRestart(org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting)
>  Time elapsed: 307.206 s <<< ERROR!
>  java.util.concurrent.TimeoutException: Timed out waiting for /test1 to reach 
> 2 replicas
>  at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:800)
>  at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testVolFailureStatsPreservedOnNNRestart(TestDataNodeVolumeFailureReporting.java:283)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)






[jira] [Commented] (HDFS-13339) Volume reference can't be released and leads to deadlock when DataXceiver does a check volume

2018-05-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492778#comment-16492778
 ] 

genericqa commented on HDFS-13339:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13339 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925411/HDFS-13339.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 324af8f79aa7 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7c34366 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24315/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24315/testReport/ |
| Max. process+thread count | 2985 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-13511) Provide specialized exception when block length cannot be obtained

2018-05-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492765#comment-16492765
 ] 

genericqa commented on HDFS-13511:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
1s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
46s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13511 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925433/HDFS-13511.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2d93788cffd1 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7c34366 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24316/testReport/ |
| Max. process+thread count | 335 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24316/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



[jira] [Updated] (HDFS-13511) Provide specialized exception when block length cannot be obtained

2018-05-28 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13511:
--
Status: Patch Available  (was: Open)

> Provide specialized exception when block length cannot be obtained
> --
>
> Key: HDFS-13511
> URL: https://issues.apache.org/jira/browse/HDFS-13511
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13511.001.patch
>
>
> In a downstream project, I saw the following code:
> {code}
> FSDataInputStream inputStream = hdfs.open(new Path(path));
> ...
> if (options.getRecoverFailedOpen() && dfs != null && 
> e.getMessage().toLowerCase()
> .startsWith("cannot obtain block length for")) {
> {code}
> The above tightly depends on the following in DFSInputStream#readBlockLength
> {code}
> throw new IOException("Cannot obtain block length for " + locatedblock);
> {code}
> The check based on string matching is brittle in production deployments.
> After discussing with [~ste...@apache.org], a better approach is to introduce a
> specialized IOException, e.g. CannotObtainBlockLengthException, so that
> downstream projects don't have to rely on string matching.
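A minimal sketch of the proposed direction, assuming a dedicated IOException subclass: the class name follows the suggestion in the description, but the constructor shape, package, and the `readBlockLength` stand-in are illustrative assumptions, not the final HDFS API. A typed catch replaces the brittle message-string match.

```java
// Sketch only: CannotObtainBlockLengthException as a typed alternative to
// matching on the exception message. Names beyond the suggested class name
// are assumptions for illustration.
import java.io.IOException;

class CannotObtainBlockLengthException extends IOException {
    CannotObtainBlockLengthException(String locatedBlock) {
        super("Cannot obtain block length for " + locatedBlock);
    }
}

public class Demo {
    // Stand-in for DFSInputStream#readBlockLength's failure path.
    static void readBlockLength(boolean fail) throws IOException {
        if (fail) {
            throw new CannotObtainBlockLengthException("blk_1073741825");
        }
    }

    public static void main(String[] args) throws IOException {
        boolean recovered = false;
        try {
            readBlockLength(true);
        } catch (CannotObtainBlockLengthException e) {
            // typed catch replaces e.getMessage().startsWith("cannot obtain ...")
            recovered = true;
        }
        System.out.println(recovered); // prints "true"
    }
}
```

Downstream code can then catch the subclass while letting other IOExceptions propagate unchanged.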



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13511) Provide specialized exception when block length cannot be obtained

2018-05-28 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13511:
--
Attachment: HDFS-13511.001.patch

> Provide specialized exception when block length cannot be obtained
> --
>
> Key: HDFS-13511
> URL: https://issues.apache.org/jira/browse/HDFS-13511
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13511.001.patch
>
>
> In a downstream project, I saw the following code:
> {code}
> FSDataInputStream inputStream = hdfs.open(new Path(path));
> ...
> if (options.getRecoverFailedOpen() && dfs != null && 
> e.getMessage().toLowerCase()
> .startsWith("cannot obtain block length for")) {
> {code}
> The above tightly depends on the following in DFSInputStream#readBlockLength
> {code}
> throw new IOException("Cannot obtain block length for " + locatedblock);
> {code}
> The check based on string matching is brittle in production deployments.
> After discussing with [~ste...@apache.org], a better approach is to introduce a
> specialized IOException, e.g. CannotObtainBlockLengthException, so that
> downstream projects don't have to rely on string matching.






[jira] [Commented] (HDFS-13583) RBF: Router admin clrQuota is not synchronized with nameservice

2018-05-28 Thread Dibyendu Karmakar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492676#comment-16492676
 ] 

Dibyendu Karmakar commented on HDFS-13583:
--

Added the patch. [~linyiqun], could you please take a look?

> RBF: Router admin clrQuota is not synchronized with nameservice
> ---
>
> Key: HDFS-13583
> URL: https://issues.apache.org/jira/browse/HDFS-13583
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
> Attachments: HDFS-13583-000.patch
>
>
> The Router admin -clrQuota command removes the quota from the mount table
> only; it is not synchronized with the nameservice.
>  
>  






[jira] [Commented] (HDFS-13583) RBF: Router admin clrQuota is not synchronized with nameservice

2018-05-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492655#comment-16492655
 ] 

genericqa commented on HDFS-13583:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
59s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13583 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925402/HDFS-13583-000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c67ce258d646 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7c34366 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24314/testReport/ |
| Max. process+thread count | 954 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24314/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |





> RBF: Router admin clrQuota is not synchronized with nameservice
> ---
>
> Key: HDFS-13583
> 

[jira] [Created] (HDDS-127) Add CloseOpenPipelines and CloseContainerEventHandler in SCM

2018-05-28 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-127:


 Summary: Add CloseOpenPipelines and CloseContainerEventHandler in 
SCM
 Key: HDDS-127
 URL: https://issues.apache.org/jira/browse/HDDS-127
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.2.1


When a node fails or runs out of space, all the pipelines that contain this
particular datanode ID need to be removed from the active pipelines list.
Moreover, all open containers residing on the datanode need to be closed on
all the other datanodes, and the state in the SCM container state manager
needs to be updated to maintain consistency. This Jira aims to add the
required event handlers.
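To make the event-handler shape concrete, here is a minimal sketch; the interface and class names are assumptions for illustration, not SCM's actual event API. A dead-node event carries the datanode ID, and the handler reacts by recording which pipelines must be closed.

```java
// Sketch only: a handler that reacts to a dead-node event by marking that
// node's pipelines for closure. All names here are hypothetical.
import java.util.ArrayList;
import java.util.List;

public class DeadNodeHandlerSketch {
    interface EventHandler<T> {
        void onMessage(T payload);
    }

    static class DeadNodeHandler implements EventHandler<String> {
        final List<String> pipelinesToClose = new ArrayList<>();

        @Override
        public void onMessage(String datanodeId) {
            // In SCM this would look up every pipeline containing the datanode
            // and fire close-container events; here we just record the action.
            pipelinesToClose.add("pipelines-of-" + datanodeId);
        }
    }

    public static void main(String[] args) {
        DeadNodeHandler handler = new DeadNodeHandler();
        handler.onMessage("datanode-1"); // simulated dead-node event
        System.out.println(handler.pipelinesToClose); // prints "[pipelines-of-datanode-1]"
    }
}
```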






[jira] [Updated] (HDFS-13339) Volume reference can't be released and leads to deadlock when DataXceiver does a check volume

2018-05-28 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-13339:
-
Attachment: HDFS-13339.003.patch

> Volume reference can't be released and leads to deadlock when DataXceiver 
> does a check volume
> -
>
> Key: HDFS-13339
> URL: https://issues.apache.org/jira/browse/HDFS-13339
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
> Environment: os: Linux 2.6.32-358.el6.x86_64
> hadoop version: hadoop-3.2.0-SNAPSHOT
> unit: mvn test -Pnative 
> -Dtest=TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Critical
>  Labels: DataNode, volumes
> Attachments: HDFS-13339.001.patch, HDFS-13339.002.patch, 
> HDFS-13339.003.patch
>
>
> When I execute the unit test
>  TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart,
> the process blocks on waitReplication; detailed information follows:
> [INFO] ---
>  [INFO] T E S T S
>  [INFO] ---
>  [INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
>  [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 307.492 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
>  [ERROR] 
> testVolFailureStatsPreservedOnNNRestart(org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting)
>  Time elapsed: 307.206 s <<< ERROR!
>  java.util.concurrent.TimeoutException: Timed out waiting for /test1 to reach 
> 2 replicas
>  at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:800)
>  at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testVolFailureStatsPreservedOnNNRestart(TestDataNodeVolumeFailureReporting.java:283)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)






[jira] [Commented] (HDDS-108) Update Node2ContainerMap while processing container reports

2018-05-28 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492593#comment-16492593
 ] 

Mukul Kumar Singh commented on HDDS-108:


Thanks for working on this [~shashikant]. The patch looks really good to me.

1) Should ContainerSupervisor.java be removed in this patch as well ?
2) SCMException.java:120, can INVALID_CONTAINER_REPORT_PROCESSING_RESULT be 
renamed to something like "INVALID_PROCESSED_CONTAINER_REPORT"
3) SCMNodeManager.java:29: remove the unused import
4) SCMNodeManager.java:73: please convert the wildcard imports to individual 
classes.
5) StorageContainerManager:201: addHandler is not currently used anywhere in
the code; please remove this function.
6) TestContainerSupervisor:62: please remove the wildcard imports in this file.

> Update Node2ContainerMap while processing container reports
> ---
>
> Key: HDDS-108
> URL: https://issues.apache.org/jira/browse/HDDS-108
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-108.00.patch, HDDS-108.01.patch
>
>
> When the container report comes, the Node2Container Map should be updated via 
> SCMContainerManager.






[jira] [Commented] (HDDS-81) Moving ContainerReport inside Datanode heartbeat

2018-05-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-81?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492589#comment-16492589
 ] 

genericqa commented on HDDS-81:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDDS-81 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-81 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925403/HDDS-81.000.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/200/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |





> Moving ContainerReport inside Datanode heartbeat
> 
>
> Key: HDDS-81
> URL: https://issues.apache.org/jira/browse/HDDS-81
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-81.000.patch
>
>
> {{sendContainerReport}} is currently a separate RPC call; as part of the
> heartbeat refactoring, ContainerReport will be moved into the heartbeat.






[jira] [Updated] (HDDS-81) Moving ContainerReport inside Datanode heartbeat

2018-05-28 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-81?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-81:

Status: Patch Available  (was: Open)

> Moving ContainerReport inside Datanode heartbeat
> 
>
> Key: HDDS-81
> URL: https://issues.apache.org/jira/browse/HDDS-81
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-81.000.patch
>
>
> {{sendContainerReport}} is currently a separate RPC call; as part of the
> heartbeat refactoring, ContainerReport will be moved into the heartbeat.






[jira] [Commented] (HDDS-81) Moving ContainerReport inside Datanode heartbeat

2018-05-28 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-81?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492588#comment-16492588
 ] 

Nanda kumar commented on HDDS-81:
-

Acceptance Test Results
{noformat}
==
Acceptance
==
Acceptance.Ozone-Shell :: Smoke test to start cluster with docker-compose e...
==
Daemons are running without error | PASS |
--
Check if datanode is connected to the scm | PASS |
--
Scale it up to 5 datanodes| PASS |
--
Test ozone shell (RestClient without http port)   | PASS |
--
Test ozone shell (RestClient with http port)  | PASS |
--
Test ozone shell (RestClient without hostname)| PASS |
--
Test ozone shell (RpcClient without http port)| PASS |
--
Test ozone shell (RpcClient with http port)   | PASS |
--
Test ozone shell (RpcClient without hostname) | PASS |
--
Acceptance.Ozone-Shell :: Smoke test to start cluster with docker-... | PASS |
9 critical tests, 9 passed, 0 failed
9 tests total, 9 passed, 0 failed
==
Acceptance.Ozone :: Smoke test to start cluster with docker-compose environ...
==
Daemons are running without error | PASS |
--
Check if datanode is connected to the scm | PASS |
--
Scale it up to 5 datanodes| PASS |
--
Test rest interface   | PASS |
--
Check webui static resources  | PASS |
--
Start freon testing   | PASS |
--
Acceptance.Ozone :: Smoke test to start cluster with docker-compos... | PASS |
6 critical tests, 6 passed, 0 failed
6 tests total, 6 passed, 0 failed
==
Acceptance| PASS |
15 critical tests, 15 passed, 0 failed
15 tests total, 15 passed, 0 failed
==
{noformat}

> Moving ContainerReport inside Datanode heartbeat
> 
>
> Key: HDDS-81
> URL: https://issues.apache.org/jira/browse/HDDS-81
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-81.000.patch
>
>
> {{sendContainerReport}} is currently a separate RPC call; as part of the
> heartbeat refactoring, ContainerReport will be moved into the heartbeat.






[jira] [Updated] (HDDS-81) Moving ContainerReport inside Datanode heartbeat

2018-05-28 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-81?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-81:

Attachment: HDDS-81.000.patch

> Moving ContainerReport inside Datanode heartbeat
> 
>
> Key: HDDS-81
> URL: https://issues.apache.org/jira/browse/HDDS-81
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-81.000.patch
>
>
> {{sendContainerReport}} is currently a separate RPC call; as part of the
> heartbeat refactoring, ContainerReport will be moved into the heartbeat.






[jira] [Updated] (HDFS-13583) RBF: Router admin clrQuota is not synchronized with nameservice

2018-05-28 Thread Dibyendu Karmakar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dibyendu Karmakar updated HDFS-13583:
-
Status: Patch Available  (was: Open)

> RBF: Router admin clrQuota is not synchronized with nameservice
> ---
>
> Key: HDFS-13583
> URL: https://issues.apache.org/jira/browse/HDFS-13583
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
> Attachments: HDFS-13583-000.patch
>
>
> The Router admin -clrQuota command removes the quota from the mount table
> only; it is not synchronized with the nameservice.
>  
>  






[jira] [Updated] (HDFS-13583) RBF: Router admin clrQuota is not synchronized with nameservice

2018-05-28 Thread Dibyendu Karmakar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dibyendu Karmakar updated HDFS-13583:
-
Attachment: HDFS-13583-000.patch

> RBF: Router admin clrQuota is not synchronized with nameservice
> ---
>
> Key: HDFS-13583
> URL: https://issues.apache.org/jira/browse/HDFS-13583
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
> Attachments: HDFS-13583-000.patch
>
>
> The Router admin -clrQuota command removes the quota from the mount table
> only; it is not synchronized with the nameservice.
>  
>  






[jira] [Comment Edited] (HDFS-13121) NPE when request file descriptors when SC read

2018-05-28 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492556#comment-16492556
 ] 

Gabor Bota edited comment on HDFS-13121 at 5/28/18 11:46 AM:
-

+1 for the v2 patch, failed tests on yetus are unrelated. Thanks for providing 
unit test for it!

[~xiegang112], I think we should put in the effort to test these changes. If we
don't provide unit tests for these failures, some future change could
reintroduce the same issue; and if we only provide a description of how to
check the failure manually, testing will be far less reliable.


was (Author: gabor.bota):
+1 for the v2 patch, failed tests on yetus are unrelated.

[~xiegang112], I think we should put in the effort to test these changes. If we 
don't provide unit tests for these failures, we could have some other change in 
the future which could introduce the same issue again or if we only provide a 
description how to check the failure manually, testing will be way less 
reliable.

> NPE when request file descriptors when SC read
> --
>
> Key: HDFS-13121
> URL: https://issues.apache.org/jira/browse/HDFS-13121
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Gang Xie
>Assignee: Zsolt Venczel
>Priority: Minor
> Attachments: HDFS-13121.01.patch, HDFS-13121.02.patch
>
>
> Recently, we hit an issue where the DFSClient throws an NPE. The case is that
> the app process exceeds its open-file limit. In that case, libhadoop never
> throws an exception but returns null for the requested fds. However,
> requestFileDescriptors uses the returned fds directly without any check,
> which leads to the NPE.
>  
> We need to add a null-pointer sanity check here.
>  
> private ShortCircuitReplicaInfo requestFileDescriptors(DomainPeer peer,
>  Slot slot) throws IOException {
>  ShortCircuitCache cache = clientContext.getShortCircuitCache();
>  final DataOutputStream out =
>  new DataOutputStream(new BufferedOutputStream(peer.getOutputStream()));
>  SlotId slotId = slot == null ? null : slot.getSlotId();
>  new Sender(out).requestShortCircuitFds(block, token, slotId, 1,
>  failureInjector.getSupportsReceiptVerification());
>  DataInputStream in = new DataInputStream(peer.getInputStream());
>  BlockOpResponseProto resp = BlockOpResponseProto.parseFrom(
>  PBHelperClient.vintPrefixed(in));
>  DomainSocket sock = peer.getDomainSocket();
>  failureInjector.injectRequestFileDescriptorsFailure();
>  switch (resp.getStatus()) {
>  case SUCCESS:
>  byte buf[] = new byte[1];
>  FileInputStream[] fis = new FileInputStream[2];
>  {color:#d04437}sock.recvFileInputStreams(fis, buf, 0, buf.length);{color}
>  ShortCircuitReplica replica = null;
>  try {
>  ExtendedBlockId key =
>  new ExtendedBlockId(block.getBlockId(), block.getBlockPoolId());
>  if (buf[0] == USE_RECEIPT_VERIFICATION.getNumber()) {
>  LOG.trace("Sending receipt verification byte for slot {}", slot);
>  sock.getOutputStream().write(0);
>  }
>  {color:#d04437}replica = new ShortCircuitReplica(key, fis[0], fis[1], 
> cache,{color}
> {color:#d04437} Time.monotonicNow(), slot);{color}
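As an illustration of the sanity check the report asks for, here is a minimal sketch; the method and class names are hypothetical stand-ins, not the actual DFSClient code. If the native layer returned no file descriptors, fail with a descriptive IOException instead of a bare NullPointerException.

```java
// Sketch only: null-check the fds array before constructing the replica.
// Names are assumptions for illustration.
import java.io.FileInputStream;
import java.io.IOException;

public class FdCheck {
    static void checkFds(FileInputStream[] fis) throws IOException {
        if (fis == null || fis[0] == null || fis[1] == null) {
            throw new IOException("got no file descriptors from the datanode; "
                + "the process may have exceeded its open-file limit");
        }
    }

    public static void main(String[] args) {
        try {
            checkFds(null); // simulates the failure the report describes
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

A check like this would run between receiving the streams and constructing the ShortCircuitReplica, turning the NPE into a diagnosable I/O error.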






[jira] [Commented] (HDFS-13628) Update Archival Storage doc for Provided Storage

2018-05-28 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492573#comment-16492573
 ] 

Takanobu Asanuma commented on HDFS-13628:
-

Thanks for reviewing and committing it, [~ajisakaa]!

> Update Archival Storage doc for Provided Storage
> 
>
> Key: HDFS-13628
> URL: https://issues.apache.org/jira/browse/HDFS-13628
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15491.1.patch
>
>







[jira] [Commented] (HDFS-13121) NPE when request file descriptors when SC read

2018-05-28 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492556#comment-16492556
 ] 

Gabor Bota commented on HDFS-13121:
---

+1 for the v2 patch, failed tests on yetus are unrelated.

[~xiegang112], I think we should put in the effort to test these changes. If we
don't provide unit tests for these failures, some future change could
reintroduce the same issue; and if we only provide a description of how to
check the failure manually, testing will be far less reliable.

> NPE when request file descriptors when SC read
> --
>
> Key: HDFS-13121
> URL: https://issues.apache.org/jira/browse/HDFS-13121
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Gang Xie
>Assignee: Zsolt Venczel
>Priority: Minor
> Attachments: HDFS-13121.01.patch, HDFS-13121.02.patch
>
>
> Recently, we hit an issue where the DFSClient throws an NPE. The case is that
> the app process exceeds its open-file limit. In that case, libhadoop never
> throws an exception but returns null for the requested fds. However,
> requestFileDescriptors uses the returned fds directly without any check,
> which leads to the NPE.
>  
> We need to add a null-pointer sanity check here.
>  
> private ShortCircuitReplicaInfo requestFileDescriptors(DomainPeer peer,
>  Slot slot) throws IOException {
>  ShortCircuitCache cache = clientContext.getShortCircuitCache();
>  final DataOutputStream out =
>  new DataOutputStream(new BufferedOutputStream(peer.getOutputStream()));
>  SlotId slotId = slot == null ? null : slot.getSlotId();
>  new Sender(out).requestShortCircuitFds(block, token, slotId, 1,
>  failureInjector.getSupportsReceiptVerification());
>  DataInputStream in = new DataInputStream(peer.getInputStream());
>  BlockOpResponseProto resp = BlockOpResponseProto.parseFrom(
>  PBHelperClient.vintPrefixed(in));
>  DomainSocket sock = peer.getDomainSocket();
>  failureInjector.injectRequestFileDescriptorsFailure();
>  switch (resp.getStatus()) {
>  case SUCCESS:
>  byte buf[] = new byte[1];
>  FileInputStream[] fis = new FileInputStream[2];
>  {color:#d04437}sock.recvFileInputStreams(fis, buf, 0, buf.length);{color}
>  ShortCircuitReplica replica = null;
>  try {
>  ExtendedBlockId key =
>  new ExtendedBlockId(block.getBlockId(), block.getBlockPoolId());
>  if (buf[0] == USE_RECEIPT_VERIFICATION.getNumber()) {
>  LOG.trace("Sending receipt verification byte for slot {}", slot);
>  sock.getOutputStream().write(0);
>  }
>  {color:#d04437}replica = new ShortCircuitReplica(key, fis[0], fis[1], 
> cache,{color}
> {color:#d04437} Time.monotonicNow(), slot);{color}






[jira] [Commented] (HDFS-13628) Update Archival Storage doc for Provided Storage

2018-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492508#comment-16492508
 ] 

Hudson commented on HDFS-13628:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14299 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14299/])
HDFS-13628. Update Archival Storage doc for Provided Storage (aajisaka: rev 
04757e5864bd4904fd5a59d143fff480814700e4)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md


> Update Archival Storage doc for Provided Storage
> 
>
> Key: HDFS-13628
> URL: https://issues.apache.org/jira/browse/HDFS-13628
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15491.1.patch
>
>







[jira] [Created] (HDDS-126) Fix findbugs warning in MetadataKeyFilters.java

2018-05-28 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HDDS-126:
--

 Summary: Fix findbugs warning in MetadataKeyFilters.java
 Key: HDDS-126
 URL: https://issues.apache.org/jira/browse/HDDS-126
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Akira Ajisaka


{noformat}
module:hadoop-hdds/common 
   Found reliance on default encoding in 
org.apache.hadoop.utils.MetadataKeyFilters$KeyPrefixFilter.filterKey(byte[], 
byte[], byte[]):in 
org.apache.hadoop.utils.MetadataKeyFilters$KeyPrefixFilter.filterKey(byte[], 
byte[], byte[]): String.getBytes() At MetadataKeyFilters.java:[line 97] 
{noformat}
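The warning flags a call to `String.getBytes()` with no explicit charset, so the resulting bytes depend on the JVM's default encoding. A minimal sketch of the problem and the usual fix; the class and method names here are illustrative, not the actual MetadataKeyFilters code:

```java
import java.nio.charset.StandardCharsets;

// Illustrative only -- not the actual MetadataKeyFilters code.
class CharsetSafeKey {

    // What findbugs flags: the bytes produced depend on the JVM's
    // default charset, so behavior varies across platforms.
    static byte[] unsafeKeyBytes(String key) {
        return key.getBytes();
    }

    // The usual fix: name the charset explicitly so the bytes are
    // identical on every JVM.
    static byte[] safeKeyBytes(String key) {
        return key.getBytes(StandardCharsets.UTF_8);
    }
}
```

With the charset named explicitly, the findbugs "reliance on default encoding" warning goes away and key comparisons behave the same regardless of the platform locale.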






[jira] [Updated] (HDFS-13628) Update Archival Storage doc for Provided Storage

2018-05-28 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13628:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-3.1. Thanks [~tasanuma0829]!

> Update Archival Storage doc for Provided Storage
> 
>
> Key: HDFS-13628
> URL: https://issues.apache.org/jira/browse/HDFS-13628
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15491.1.patch
>
>







[jira] [Commented] (HDFS-13628) Update Archival Storage doc for Provided Storage

2018-05-28 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492484#comment-16492484
 ] 

Akira Ajisaka commented on HDFS-13628:
--

LGTM, +1

> Update Archival Storage doc for Provided Storage
> 
>
> Key: HDFS-13628
> URL: https://issues.apache.org/jira/browse/HDFS-13628
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15491.1.patch
>
>







[jira] [Commented] (HDFS-13628) Update Archival Storage doc for Provided Storage

2018-05-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492453#comment-16492453
 ] 

genericqa commented on HDFS-13628:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
37m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13628 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924705/HADOOP-15491.1.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux e5c1d053a571 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 88cbe57 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 334 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24313/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update Archival Storage doc for Provided Storage
> 
>
> Key: HDFS-13628
> URL: https://issues.apache.org/jira/browse/HDFS-13628
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15491.1.patch
>
>







[jira] [Commented] (HDDS-108) Update Node2ContainerMap while processing container reports

2018-05-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492443#comment-16492443
 ] 

genericqa commented on HDDS-108:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdds/server-scm: The patch generated 2 
new + 9 unchanged - 2 fixed = 11 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
2s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925364/HDDS-108.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 04e64538130a 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0cf6e87 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/199/artifact/out/branch-findbugs-hadoop-hdds_server-scm-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/199/artifact/out/diff-checkstyle-hadoop-hdds_server-scm.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/199/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/199/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 301 (vs. ulimit of 

[jira] [Moved] (HDFS-13628) Update Archival Storage doc for Provided Storage

2018-05-28 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka moved HADOOP-15491 to HDFS-13628:
---

Affects Version/s: (was: 3.1.0)
   3.1.0
 Target Version/s:   (was: 3.2.0, 3.1.1)
  Component/s: (was: documentation)
   documentation
  Key: HDFS-13628  (was: HADOOP-15491)
  Project: Hadoop HDFS  (was: Hadoop Common)

> Update Archival Storage doc for Provided Storage
> 
>
> Key: HDFS-13628
> URL: https://issues.apache.org/jira/browse/HDFS-13628
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15491.1.patch
>
>







[jira] [Commented] (HDDS-108) Update Node2ContainerMap while processing container reports

2018-05-28 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492404#comment-16492404
 ] 

Shashikant Banerjee commented on HDDS-108:
--

 Patch v1 fixes the checkstyle, findbugs, and unit test failures. It also adds 
an ignore tag to the ContainerSupervisor tests, as ContainerSupervisor will be 
removed from SCM.

> Update Node2ContainerMap while processing container reports
> ---
>
> Key: HDDS-108
> URL: https://issues.apache.org/jira/browse/HDDS-108
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-108.00.patch, HDDS-108.01.patch
>
>
> When the container report comes, the Node2Container Map should be updated via 
> SCMContainerManager.
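The update described above can be sketched as a map keyed by datanode whose entry is rebuilt from each incoming report. This is a hypothetical simplification, not the real Node2ContainerMap or SCMContainerManager API:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: on each container report, replace the datanode's
// container set with the freshly reported one, which is the essence of
// keeping the node-to-container map current.
class Node2ContainerMapSketch {

    private final Map<UUID, Set<Long>> node2ContainerMap =
        new ConcurrentHashMap<>();

    // Process a full container report from one datanode.
    void processReport(UUID datanodeId, Set<Long> reportedContainers) {
        node2ContainerMap.put(datanodeId, new HashSet<>(reportedContainers));
    }

    // Containers currently known for a datanode (empty if none reported).
    Set<Long> getContainers(UUID datanodeId) {
        return node2ContainerMap.getOrDefault(datanodeId,
            Collections.emptySet());
    }
}
```

The real code would additionally diff the old and new sets to detect missing or newly added containers; the sketch only shows the map staying in sync with reports.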






[jira] [Updated] (HDDS-108) Update Node2ContainerMap while processing container reports

2018-05-28 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-108:
-
Attachment: HDDS-108.01.patch

> Update Node2ContainerMap while processing container reports
> ---
>
> Key: HDDS-108
> URL: https://issues.apache.org/jira/browse/HDDS-108
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-108.00.patch, HDDS-108.01.patch
>
>
> When the container report comes, the Node2Container Map should be updated via 
> SCMContainerManager.






[jira] [Updated] (HDFS-13627) TestErasureCodingExerciseAPIs fails on Windows

2018-05-28 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13627:
-
Description: 
All tests in 
[TestErasureCodingExerciseAPIs|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs/TestErasureCodingExerciseAPIs/]
 fail with the error message:
No FileSystem for scheme "filetarget"

This is caused by improper Path usage on Windows.

  was:
All tests in 
[TestErasureCodingExerciseAPIs|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs/TestErasureCodingExerciseAPIs/]
 fail with the error message:
No FileSystem for scheme "filetarget"

This is caused by Path usage on Windows.


> TestErasureCodingExerciseAPIs fails on Windows
> --
>
> Key: HDFS-13627
> URL: https://issues.apache.org/jira/browse/HDFS-13627
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13627.000.patch
>
>
> All tests in 
> [TestErasureCodingExerciseAPIs|https://builds.apache.org/job/hadoop-trunk-win/479/testReport/org.apache.hadoop.hdfs/TestErasureCodingExerciseAPIs/]
>  fail with the error message:
> No FileSystem for scheme "filetarget"
> This is caused by improper Path usage on Windows.






[jira] [Commented] (HDFS-13132) Ozone: Handle datanode failures in Storage Container Manager

2018-05-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492351#comment-16492351
 ] 

genericqa commented on HDFS-13132:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-13132 does not apply to HDFS-7240. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13132 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910155/HDFS-13132-HDFS-7240.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24312/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Handle datanode failures in Storage Container Manager
> 
>
> Key: HDFS-13132
> URL: https://issues.apache.org/jira/browse/HDFS-13132
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13132-HDFS-7240.001.patch, 
> HDFS-13132-HDFS-7240.002.patch
>
>
> Currently, SCM receives heartbeats from the datanodes in the cluster along 
> with container reports. Apart from this, the Ratis leader also receives 
> heartbeats from the nodes in a Raft ring. The Ratis heartbeats are at a 
> smaller interval (500 ms) whereas SCM heartbeats are at 30 s, so it is 
> considered safe to assume that a datanode is really lost when SCM misses 
> heartbeats from it.
> Pipeline recovery will follow these steps:
> 1) As noted earlier, SCM will identify a dead DN via the heartbeats. The 
> current stale interval is 1.5 m. Once a stale node has been identified, SCM 
> will find the list of containers for the pipelines the datanode was part of.
> 2) SCM sends a close-container command to the datanodes; note that at this 
> time the Ratis ring still has 2 nodes, and consistency can still be 
> guaranteed by Ratis.
> 3) If another node dies before the close-container command succeeds, Ratis 
> cannot guarantee consistency of the data being written or of the container 
> close. The pipeline will be marked as inconsistent.
> 4) The closed container will be replicated via the close-container 
> replication protocol. If the dead datanode comes back, as part of the 
> re-register command SCM will ask the datanode to format all the open 
> containers.
> 5) Return the healthy nodes to the free node pool for the next pipeline 
> allocation.
> 6) Read operations on closed containers will succeed; however, read 
> operations on an open container in a single-node cluster will be disallowed. 
> They will only be allowed under a special flag, the ReadInconsistentData flag.
> This jira will introduce the mechanism to identify and handle datanode 
> failure.
> However, handling of a) two simultaneous node failures, b) returning nodes 
> to a healthy state, c) allowing inconsistent data reads, and d) purging of 
> open containers on a zombie node will be done as part of separate bugs.
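The failure-handling steps above reduce to a small decision: with two live replicas the pipeline can still be closed consistently, with fewer it must be marked inconsistent. A hedged sketch of that decision; the enum and method names are illustrative, not the real SCM code:

```java
// Illustrative sketch of the dead-datanode decision in steps 2 and 3 of the
// description; not the actual SCM implementation.
class DeadNodeHandlerSketch {

    enum PipelineState { OPEN, CLOSING, INCONSISTENT, CLOSED }

    // Called when SCM declares one member of a 3-node pipeline dead.
    static PipelineState onNodeDeath(int liveReplicas) {
        // Step 2: with 2 of 3 replicas alive, Ratis can still close the
        // containers consistently, so move toward CLOSING.
        if (liveReplicas >= 2) {
            return PipelineState.CLOSING;
        }
        // Step 3: a second failure before the close succeeded means
        // consistency can no longer be guaranteed.
        return PipelineState.INCONSISTENT;
    }
}
```

A pipeline in the INCONSISTENT state would then only serve reads under the special ReadInconsistentData flag mentioned above.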






[jira] [Assigned] (HDFS-13132) Ozone: Handle datanode failures in Storage Container Manager

2018-05-28 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDFS-13132:


Assignee: Shashikant Banerjee  (was: Mukul Kumar Singh)

> Ozone: Handle datanode failures in Storage Container Manager
> 
>
> Key: HDFS-13132
> URL: https://issues.apache.org/jira/browse/HDFS-13132
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13132-HDFS-7240.001.patch, 
> HDFS-13132-HDFS-7240.002.patch
>
>
> Currently, SCM receives heartbeats from the datanodes in the cluster along 
> with container reports. Apart from this, the Ratis leader also receives 
> heartbeats from the nodes in a Raft ring. The Ratis heartbeats are at a 
> smaller interval (500 ms) whereas SCM heartbeats are at 30 s, so it is 
> considered safe to assume that a datanode is really lost when SCM misses 
> heartbeats from it.
> Pipeline recovery will follow these steps:
> 1) As noted earlier, SCM will identify a dead DN via the heartbeats. The 
> current stale interval is 1.5 m. Once a stale node has been identified, SCM 
> will find the list of containers for the pipelines the datanode was part of.
> 2) SCM sends a close-container command to the datanodes; note that at this 
> time the Ratis ring still has 2 nodes, and consistency can still be 
> guaranteed by Ratis.
> 3) If another node dies before the close-container command succeeds, Ratis 
> cannot guarantee consistency of the data being written or of the container 
> close. The pipeline will be marked as inconsistent.
> 4) The closed container will be replicated via the close-container 
> replication protocol. If the dead datanode comes back, as part of the 
> re-register command SCM will ask the datanode to format all the open 
> containers.
> 5) Return the healthy nodes to the free node pool for the next pipeline 
> allocation.
> 6) Read operations on closed containers will succeed; however, read 
> operations on an open container in a single-node cluster will be disallowed. 
> They will only be allowed under a special flag, the ReadInconsistentData flag.
> This jira will introduce the mechanism to identify and handle datanode 
> failure.
> However, handling of a) two simultaneous node failures, b) returning nodes 
> to a healthy state, c) allowing inconsistent data reads, and d) purging of 
> open containers on a zombie node will be done as part of separate bugs.






[jira] [Commented] (HDFS-13627) TestErasureCodingExerciseAPIs fails on Windows

2018-05-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492315#comment-16492315
 ] 

genericqa commented on HDFS-13627:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13627 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925340/HDFS-13627.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e1f07c5f5056 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0cf6e87 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24311/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24311/testReport/ |
| Max. process+thread count | 3051 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output |