[jira] [Updated] (HDDS-599) Fix TestOzoneConfiguration TestOzoneConfigurationFields

2018-10-08 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-599:

Fix Version/s: 0.3.0

> Fix TestOzoneConfiguration TestOzoneConfigurationFields
> ---
>
> Key: HDDS-599
> URL: https://issues.apache.org/jira/browse/HDDS-599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> java.lang.AssertionError: ozone-default.xml has 2 properties missing in class 
> org.apache.hadoop.ozone.OzoneConfigKeys class 
> org.apache.hadoop.hdds.scm.ScmConfigKeys class 
> org.apache.hadoop.ozone.om.OMConfigKeys class 
> org.apache.hadoop.hdds.HddsConfigKeys class 
> org.apache.hadoop.ozone.s3.S3GatewayConfigKeys Entries: 
> hdds.lock.suppress.warning.interval.ms hdds.write.lock.reporting.threshold.ms 
> expected:<0> but was:<2>
>  
> hdds.lock.suppress.warning.interval.ms and 
> hdds.write.lock.reporting.threshold.ms should be removed from 
> ozone-default.xml 
> This is caused by HDDS-354, which has missed removing these properties from 
> ozone-default.xml
>  
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-599) Fix TestOzoneConfiguration TestOzoneConfigurationFields

2018-10-08 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-599:

Target Version/s: 0.3.0
   Fix Version/s: (was: 0.3.0)

> Fix TestOzoneConfiguration TestOzoneConfigurationFields
> ---
>
> Key: HDDS-599
> URL: https://issues.apache.org/jira/browse/HDDS-599
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> java.lang.AssertionError: ozone-default.xml has 2 properties missing in class 
> org.apache.hadoop.ozone.OzoneConfigKeys class 
> org.apache.hadoop.hdds.scm.ScmConfigKeys class 
> org.apache.hadoop.ozone.om.OMConfigKeys class 
> org.apache.hadoop.hdds.HddsConfigKeys class 
> org.apache.hadoop.ozone.s3.S3GatewayConfigKeys Entries: 
> hdds.lock.suppress.warning.interval.ms hdds.write.lock.reporting.threshold.ms 
> expected:<0> but was:<2>
>  
> hdds.lock.suppress.warning.interval.ms and 
> hdds.write.lock.reporting.threshold.ms should be removed from 
> ozone-default.xml 
> This is caused by HDDS-354, which has missed removing these properties from 
> ozone-default.xml
>  
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-599) Fix TestOzoneConfiguration TestOzoneConfigurationFields

2018-10-08 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-599:
---

 Summary: Fix TestOzoneConfiguration TestOzoneConfigurationFields
 Key: HDDS-599
 URL: https://issues.apache.org/jira/browse/HDDS-599
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


java.lang.AssertionError: ozone-default.xml has 2 properties missing in class 
org.apache.hadoop.ozone.OzoneConfigKeys class 
org.apache.hadoop.hdds.scm.ScmConfigKeys class 
org.apache.hadoop.ozone.om.OMConfigKeys class 
org.apache.hadoop.hdds.HddsConfigKeys class 
org.apache.hadoop.ozone.s3.S3GatewayConfigKeys Entries: 
hdds.lock.suppress.warning.interval.ms hdds.write.lock.reporting.threshold.ms 
expected:<0> but was:<2>

 

hdds.lock.suppress.warning.interval.ms and 
hdds.write.lock.reporting.threshold.ms should be removed from ozone-default.xml 

This is caused by HDDS-354, which has missed removing these properties from 
ozone-default.xml

 

 

 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13942) [JDK10] Fix javadoc errors in hadoop-hdfs module

2018-10-08 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13942 started by Dinesh Chitlangia.

> [JDK10] Fix javadoc errors in hadoop-hdfs module
> 
>
> Key: HDFS-13942
> URL: https://issues.apache.org/jira/browse/HDFS-13942
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> There are 212 errors in hadoop-hdfs module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-568) ozone sh volume info, update, delete operations fail when volume name is not prefixed by /

2018-10-08 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642815#comment-16642815
 ] 

Dinesh Chitlangia commented on HDDS-568:


Failures are unrelated to the patch

> ozone sh volume info, update, delete operations fail when volume name is not 
> prefixed by /
> --
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.2.1
>Reporter: Soumitra Sulav
>Assignee: Dinesh Chitlangia
>Priority: Blocker
> Attachments: HDDS-568.001.patch, HDDS-568.002.patch
>
>
> Ozone filesystem volume isn't getting deleted even though the underlying 
> bucket is deleted and is currently empty.
> ozone sh command throws an error : VOLUME_NOT_FOUND even though its there
> On trying to create again it says : error:VOLUME_ALREADY_EXISTS (as expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642811#comment-16642811
 ] 

Hadoop QA commented on HDDS-577:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 58s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests 

[jira] [Commented] (HDDS-478) Log files related to each daemon doesn't have proper startup and shutdown logs

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642800#comment-16642800
 ] 

Hadoop QA commented on HDDS-478:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
39m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-478 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942950/HDDS-478.001.patch |
| Optional Tests |  asflicense  mvnsite  unit  |
| uname | Linux 0cd99d5c8f46 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a30b1d1 |
| maven | version: Apache Maven 3.3.9 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1310/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/common U: hadoop-ozone/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1310/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Log files related to each daemon doesn't have proper startup and shutdown logs
> --
>
> Key: HDDS-478
> URL: https://issues.apache.org/jira/browse/HDDS-478
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-478.001.patch
>
>
> All the logs (startup/shutdown messages) go into ozone.log. We have a 
> separate log file for each daemon and that log file doesn't contain these 
> logs. 
> {noformat}
> [root@ctr-e138-1518143905142-468367-01-02 logs]# cat ozone.log.2018-09-16 
> | head -20
> 2018-09-16 05:29:59,638 [main] INFO (LogAdapter.java:51) - STARTUP_MSG:
> /
> STARTUP_MSG: Starting OzoneManager
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-468367-01-02.hwx.site/172.27.68.129
> STARTUP_MSG: args = [-createObjectStore]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Commented] (HDFS-13878) HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST

2018-10-08 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642796#comment-16642796
 ] 

Lokesh Jain commented on HDFS-13878:


[~smeng] Thanks for working on this! I am sorry for late review. The patch 
looks good to me. Can you please use TestHttpFSServer#sendRequestToHttpFSServer 
which was added earlier in 
TestHttpFSServer#verifyGetSnapshottableDirectoryList. Other than that I am +1 
on the patch.

> HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST
> ---
>
> Key: HDFS-13878
> URL: https://issues.apache.org/jira/browse/HDFS-13878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13878.001.patch, HDFS-13878.002.patch, 
> HDFS-13878.003.patch
>
>
> Implement GETSNAPSHOTTABLEDIRECTORYLIST  (from HDFS-13141) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-568) ozone sh volume info, update, delete operations fail when volume name is not prefixed by /

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642795#comment-16642795
 ] 

Hadoop QA commented on HDDS-568:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 40s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
|   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
|   | hadoop.ozone.client.rest.TestOzoneRestClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-568 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942944/HDDS-568.002.patch |
| Optional 

[jira] [Updated] (HDDS-568) ozone sh volume info, update, delete operations fail when volume name is not prefixed by /

2018-10-08 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-568:
---
Summary: ozone sh volume info, update, delete operations fail when volume 
name is not prefixed by /  (was: Ozone sh unable to delete volume)

> ozone sh volume info, update, delete operations fail when volume name is not 
> prefixed by /
> --
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.2.1
>Reporter: Soumitra Sulav
>Assignee: Dinesh Chitlangia
>Priority: Blocker
> Attachments: HDDS-568.001.patch, HDDS-568.002.patch
>
>
> Ozone filesystem volume isn't getting deleted even though the underlying 
> bucket is deleted and is currently empty.
> ozone sh command throws an error : VOLUME_NOT_FOUND even though its there
> On trying to create again it says : error:VOLUME_ALREADY_EXISTS (as expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-598) Volume Delete operation fails if volume name is not prefixed with /

2018-10-08 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-598:
---
Summary: Volume Delete operation fails if volume name is not prefixed with 
/  (was: Volume Delete operation fails if volume name is not suffixed with /)

> Volume Delete operation fails if volume name is not prefixed with /
> ---
>
> Key: HDDS-598
> URL: https://issues.apache.org/jira/browse/HDDS-598
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Priority: Blocker
>  Labels: newbie
>
> If we try to delete a volume without specifying the leading '/', then the 
> delete operation fails
> {code:java}
> $ bin/ozone sh volume create vol1 -u xyz
> $ bin/ozone sh volume delete vol1
> Delete Volume failed, error:VOLUME_NOT_FOUND
> $ bin/ozone sh volume delete /vol1
> Volume vol1 is deleted{code}
> In {{DeleteVolumeHandler.java}}, the first character in volume name is 
> skipped.
> {code:java}
> // we need to skip the slash in the URI path
> String volumeName = ozoneURI.getPath().substring(1);{code}
> We should only skip the leading /'s while interpreting the volume name. 
> Similar to how we interpret volume name in CreateVolumeHandler.
> {code:java}
> String volumeName = ozoneURI.getPath().replaceAll("^/+", "");{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642765#comment-16642765
 ] 

Hudson commented on HDFS-13926:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15143 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15143/])
HDFS-13926. ThreadLocal aggregations for FileSystem.Statistics are (xiao: rev 
08bb6c49a5aec32b7d9f29238560f947420405d6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/IOUtilsClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedReconstructor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReaderStrategy.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedReader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystemWithECFile.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedBlockReader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java


> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13926-002.patch, HDFS-13926-003.patch, 
> HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found out that per-thread read 
> stats for EC is incorrect. This is due to the striped reads are done 
> asynchronously on the worker threads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-08 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13926:
-
Fix Version/s: 3.2.0

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13926-002.patch, HDFS-13926-003.patch, 
> HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found out that per-thread read 
> stats for EC is incorrect. This is due to the striped reads are done 
> asynchronously on the worker threads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-08 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642754#comment-16642754
 ] 

Xiao Chen commented on HDFS-13926:
--

Committed this to trunk and branch-3.2. Thanks again!

There are some conflicts with earlier branches due to HADOOP-15507 is only in 
3.2+. I think we should fix this issue in 3.0+ given even the normal read stats 
doesn't work with EC. [~hgadre] feel free to provide a patch if you're 
interested.

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13926-002.patch, HDFS-13926-003.patch, 
> HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found out that per-thread read 
> stats for EC is incorrect. This is due to the striped reads are done 
> asynchronously on the worker threads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-08 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13926:
-
Target Version/s: 3.0.4  (was: 3.2.0)

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Attachments: HDFS-13926-002.patch, HDFS-13926-003.patch, 
> HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found out that per-thread read 
> stats for EC is incorrect. This is due to the striped reads are done 
> asynchronously on the worker threads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-08 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13926:
-
Target Version/s: 3.2.0  (was: 3.0.4)

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Attachments: HDFS-13926-002.patch, HDFS-13926-003.patch, 
> HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found out that per-thread read 
> stats for EC is incorrect. This is due to the striped reads are done 
> asynchronously on the worker threads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-10-08 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642747#comment-16642747
 ] 

Xiao Chen commented on HDFS-13882:
--

Thanks for the latest rev Kitti. Jenkins links are gone now, but I verified 
locally the 2 failing tests pass. 

+1 on patch 5. If [~arpitagarwal] and others doesn't have additional comments, 
I'll commit this by Tuesday.

> Set a maximum for the delay before retrying locateFollowingBlock
> 
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch, 
> HDFS-13882.003.patch, HDFS-13882.004.patch, HDFS-13882.005.patch
>
>
> More and more we are seeing cases where customers are running into the java 
> io exception "Unable to close file because the last block does not have 
> enough number of replicas" on client file closure. The common workaround is 
> to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13962) Add null check for add-replica pool to avoid lock acquiring

2018-10-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642726#comment-16642726
 ] 

Hudson commented on HDFS-13962:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15142 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15142/])
HDFS-13962. Add null check for add-replica pool to avoid lock acquiring. 
(yqlin: rev 1043795f7fe44c98a34f8ea3cea708c801c3043b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java


> Add null check for add-replica pool to avoid lock acquiring
> ---
>
> Key: HDFS-13962
> URL: https://issues.apache.org/jira/browse/HDFS-13962
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Fix For: 3.2.0, 3.1.2, 3.3.0
>
> Attachments: HDFS-13962.01.patch
>
>
> This is a follow-up work for HDFS-13768. Mainly two places needed to update:
>  * Add null check for add-replica pool to avoid lock acquiring
>  * In {{ReplicaMap#addAndGet}}, it would be better to use 
> {{FoldedTreeSet#addOrReplace}} instead of {{FoldedTreeSet#add}} for adding 
> replica info. This is for the logic consistentency with add operation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-478) Log files related to each daemon doesn't have proper startup and shutdown logs

2018-10-08 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-478:
---
Attachment: HDDS-478.001.patch
Status: Patch Available  (was: Open)

[~hanishakoneru] - Good catch about audit logs!

[~anu] - The default audit log config had console appender configured and thus 
we were able to see audit logs in .out file.

I have commented that section so that people can uncomment it when they really 
need else by default audit logs will only go to the audit.log file.

 

I have verified this by using tar and running ozone locally.

 

Thank you.

> Log files related to each daemon doesn't have proper startup and shutdown logs
> --
>
> Key: HDDS-478
> URL: https://issues.apache.org/jira/browse/HDDS-478
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-478.001.patch
>
>
> All the logs (startup/shutdown messages) go into ozone.log. We have a 
> separate log file for each daemon and that log file doesn't contain these 
> logs. 
> {noformat}
> [root@ctr-e138-1518143905142-468367-01-02 logs]# cat ozone.log.2018-09-16 
> | head -20
> 2018-09-16 05:29:59,638 [main] INFO (LogAdapter.java:51) - STARTUP_MSG:
> /
> STARTUP_MSG: Starting OzoneManager
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-468367-01-02.hwx.site/172.27.68.129
> STARTUP_MSG: args = [-createObjectStore]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Updated] (HDFS-13962) Add null check for add-replica pool to avoid lock acquiring

2018-10-08 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13962:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   3.1.2
   3.2.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.2 and branch-3.1.
Thanks [~surendrasingh] for the contribution.

> Add null check for add-replica pool to avoid lock acquiring
> ---
>
> Key: HDFS-13962
> URL: https://issues.apache.org/jira/browse/HDFS-13962
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Fix For: 3.2.0, 3.1.2, 3.3.0
>
> Attachments: HDFS-13962.01.patch
>
>
> This is a follow-up work for HDFS-13768. Mainly two places needed to update:
>  * Add null check for add-replica pool to avoid lock acquiring
>  * In {{ReplicaMap#addAndGet}}, it would be better to use 
> {{FoldedTreeSet#addOrReplace}} instead of {{FoldedTreeSet#add}} for adding 
> replica info. This is for the logic consistentency with add operation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642713#comment-16642713
 ] 

Hadoop QA commented on HDFS-13926:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 82 
unchanged - 2 fixed = 82 total (was 84) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
40s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13926 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942926/HDFS-13926-003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 05c773afbdb5 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | 

[jira] [Updated] (HDFS-13970) CacheManager Directives Map

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13970:
---
Status: Patch Available  (was: Open)

Added new patch to fix check-style errors

> CacheManager Directives Map
> ---
>
> Key: HDFS-13970
> URL: https://issues.apache.org/jira/browse/HDFS-13970
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching, hdfs
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13970.1.patch, HDFS-13970.2.patch
>
>
> # Use Guava Multimap to simplify code
>  ## Currently, code uses a mix of LinkedList and ArrayList - just pick one
>  ## Currently, {{directivesByPath}} structure is sorted but never used in a 
> sorted way, it only performs remove and add operations, no iteration - use a 
> {{Set}} instead of a {{List}} for values to support faster remove operation.  
> Use a {{HashSet}} instead of a {{TreeSet}} for keys since it doesn't appear 
> that order really matters.
>  # The {{CacheDirective}} class needs a better hashcode implementation since 
> it will be used in a Set.  Do not instantiate a {{HashBuilder}} object every 
> time {{hashcode}} is called. Ouch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13963) NN UI is broken with IE11

2018-10-08 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642711#comment-16642711
 ] 

Ayush Saxena commented on HDFS-13963:
-

Thanks [~daisuke.kobayashi] for putting this up.
Have uploaded the patch with the change.
Pls Review!!!

> NN UI is broken with IE11
> -
>
> Key: HDFS-13963
> URL: https://issues.apache.org/jira/browse/HDFS-13963
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, ui
>Affects Versions: 3.1.1
>Reporter: Daisuke Kobayashi
>Assignee: Ayush Saxena
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-13963-01.patch, Screen Shot 2018-10-05 at 
> 20.22.20.png
>
>
> Internet Explorer 11 cannot correctly display Namenode Web UI while the NN 
> itself starts successfully. I have confirmed this over 3.1.1 (latest release) 
> and 3.3.0-SNAPSHOT (current trunk) that the following message is shown.
> {code}
> Failed to retrieve data from /jmx?qry=java.lang:type=Memory, cause: 
> SyntaxError: Invalid character
> {code}
> Apparently, this is because {{dfshealth.html}} runs as IE9 mode by default.
> {code}
> 
> {code}
> Once the compatible mode is changed to IE11 through developer tool, it's 
> rendered correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13970) CacheManager Directives Map

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13970:
---
Attachment: HDFS-13970.2.patch

> CacheManager Directives Map
> ---
>
> Key: HDFS-13970
> URL: https://issues.apache.org/jira/browse/HDFS-13970
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching, hdfs
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13970.1.patch, HDFS-13970.2.patch
>
>
> # Use Guava Multimap to simplify code
>  ## Currently, code uses a mix of LinkedList and ArrayList - just pick one
>  ## Currently, {{directivesByPath}} structure is sorted but never used in a 
> sorted way, it only performs remove and add operations, no iteration - use a 
> {{Set}} instead of a {{List}} for values to support faster remove operation.  
> Use a {{HashSet}} instead of a {{TreeSet}} for keys since it doesn't appear 
> that order really matters.
>  # The {{CacheDirective}} class needs a better hashcode implementation since 
> it will be used in a Set.  Do not instantiate a {{HashBuilder}} object every 
> time {{hashcode}} is called. Ouch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13970) CacheManager Directives Map

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13970:
---
Status: Open  (was: Patch Available)

> CacheManager Directives Map
> ---
>
> Key: HDFS-13970
> URL: https://issues.apache.org/jira/browse/HDFS-13970
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching, hdfs
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13970.1.patch, HDFS-13970.2.patch
>
>
> # Use Guava Multimap to simplify code
>  ## Currently, code uses a mix of LinkedList and ArrayList - just pick one
>  ## Currently, {{directivesByPath}} structure is sorted but never used in a 
> sorted way, it only performs remove and add operations, no iteration - use a 
> {{Set}} instead of a {{List}} for values to support faster remove operation.  
> Use a {{HashSet}} instead of a {{TreeSet}} for keys since it doesn't appear 
> that order really matters.
>  # The {{CacheDirective}} class needs a better hashcode implementation since 
> it will be used in a Set.  Do not instantiate a {{HashBuilder}} object every 
> time {{hashcode}} is called. Ouch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13962) Add null check for add-replica pool to avoid lock acquiring

2018-10-08 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642708#comment-16642708
 ] 

Yiqun Lin commented on HDFS-13962:
--

+1. Commit this shortly.

> Add null check for add-replica pool to avoid lock acquiring
> ---
>
> Key: HDFS-13962
> URL: https://issues.apache.org/jira/browse/HDFS-13962
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13962.01.patch
>
>
> This is a follow-up work for HDFS-13768. Mainly two places needed to update:
>  * Add null check for add-replica pool to avoid lock acquiring
>  * In {{ReplicaMap#addAndGet}}, it would be better to use 
> {{FoldedTreeSet#addOrReplace}} instead of {{FoldedTreeSet#add}} for adding 
> replica info. This is for the logic consistentency with add operation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13975) TestBalancer#testMaxIterationTime fails sporadically

2018-10-08 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642710#comment-16642710
 ] 

Ayush Saxena commented on HDFS-13975:
-

Thanks [~jlowe] for putting this up.

But in this test I guess it was previously realized that it can fail with this 
exception and was considered fine as of then.

If you see the code it is explicitly mentioned there.

 
{code:java}
 // accept runtime if it is under 3.5 seconds, as we need to wait for
  // IN_PROGRESS report from DN, and some spare to be able to finish.
  // NOTE: This can be a source of flaky tests, if the box is busy,
  // assertion here is based on the following: Balancer is already set
  // up, iteration gets the blocks from the NN, and makes the decision
  // to move 2 blocks. After that the PendingMoves are scheduled, and
  // DataNode heartbeats in for the Balancer every second, iteration is
  // two seconds long. This means that it will fail if the setup and the
  // heartbeat from the DataNode takes more than 500ms, as the iteration
  // should end at the 3rd second from start. As the number of
  // operations seems to be pretty low, and all comm happens locally, I
  // think the possibility of a failure due to node busyness is low.
  assertTrue("Unexpected iteration runtime: " + runtime + "ms > 3.5s",
  runtime < 3500);
{code}
The timeout is for the best case scenario.For Average cases it doesn't provide 
any margin.IIUC Increasing that 3.5s limit is the only way to get this test a 
little ahead.Since all other factors which lead to this timeout seems beyond 
control.But not sure that going that way can take away the logic for this 
assertion.

> TestBalancer#testMaxIterationTime fails sporadically
> 
>
> Key: HDFS-13975
> URL: https://issues.apache.org/jira/browse/HDFS-13975
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Jason Lowe
>Priority: Major
>
> A number of precommit builds have seen this test fail like this:
> {noformat}
> java.lang.AssertionError: Unexpected iteration runtime: 4021ms > 3.5s
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.testMaxIterationTime(TestBalancer.java:1649)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-08 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642701#comment-16642701
 ] 

Xiao Chen commented on HDFS-13926:
--

Thanks Hrishikesh for pushing this through the finish line! +1 pending 
pre-commit.

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Attachments: HDFS-13926-002.patch, HDFS-13926-003.patch, 
> HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found that per-thread read stats 
> for EC are incorrect. This is because the striped reads are done 
> asynchronously on the worker threads.
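
As a hedged illustration of that failure mode (names are made up; this is not 
the FileSystem.Statistics implementation), a worker-thread increment never 
reaches the calling thread's copy of a ThreadLocal counter:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadLocalStatsExample {
  // One counter instance per thread: the reader thread owns one copy,
  // every pool thread owns another.
  static final ThreadLocal<long[]> BYTES_READ =
      ThreadLocal.withInitial(() -> new long[1]);

  public static void main(String[] args) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(3);
    for (int i = 0; i < 3; i++) {
      pool.submit(() -> BYTES_READ.get()[0] += 1024); // worker's own copy
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.SECONDS);
    // Prints 0: the caller's ThreadLocal never saw the worker increments.
    System.out.println("caller sees " + BYTES_READ.get()[0] + " bytes");
  }
}
{code}
Any fix presumably has to hand the worker-side counts back to the statistics 
of the thread that issued the read.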



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13768) Adding replicas to volume map makes DataNode start slowly

2018-10-08 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13768:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.10.0
   Status: Resolved  (was: Patch Available)

Committed this to branch-2. Thanks [~surendrasingh] for the contribution.

>  Adding replicas to volume map makes DataNode start slowly 
> ---
>
> Key: HDFS-13768
> URL: https://issues.apache.org/jira/browse/HDFS-13768
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.2
>
> Attachments: HDFS-13768-branch-2.01.patch, 
> HDFS-13768-branch-2.02.patch, HDFS-13768-branch-2.03.patch, 
> HDFS-13768.01-branch-2.patch, HDFS-13768.01.patch, HDFS-13768.02.patch, 
> HDFS-13768.03.patch, HDFS-13768.04.patch, HDFS-13768.05.patch, 
> HDFS-13768.06.patch, HDFS-13768.07.patch, HDFS-13768.patch, screenshot-1.png
>
>
> We found the DN starting very slowly when rolling-upgrading our cluster. 
> When we restart DNs, they start slowly and do not register to the NN 
> immediately, which causes a lot of the following errors:
> {noformat}
> DataXceiver error processing WRITE_BLOCK operation  src: /xx.xx.xx.xx:64360 
> dst: /xx.xx.xx.xx:50010
> java.io.IOException: Not ready to serve the block pool, 
> BP-1508644862-xx.xx.xx.xx-1493781183457.
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAndWaitForBP(DataXceiver.java:1290)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:630)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looking into the DN startup logic, it does the initial block pool operation 
> before the registration. During block pool initialization, we found that 
> adding replicas to the volume map is the most expensive operation. Related 
> log:
> {noformat}
> 2018-07-26 10:46:23,771 INFO [Thread-105] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/1/dfs/dn/current: 242722ms
> 2018-07-26 10:46:26,231 INFO [Thread-109] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/5/dfs/dn/current: 245182ms
> 2018-07-26 10:46:32,146 INFO [Thread-112] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/8/dfs/dn/current: 251097ms
> 2018-07-26 10:47:08,283 INFO [Thread-106] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/2/dfs/dn/current: 287235ms
> {noformat}
> Currently the DN uses an independent thread to scan and add replicas for 
> each volume, but we still need to wait for the slowest thread to finish its 
> work. So the main problem here is how to make these threads run faster.
> The jstack we get when DN blocking in the adding replica:
> {noformat}
> "Thread-113" #419 daemon prio=5 os_prio=0 tid=0x7f40879ff000 nid=0x145da 
> runnable [0x7f4043a38000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.list(Native Method)
>   at java.io.File.list(File.java:1122)
>   at java.io.File.listFiles(File.java:1207)
>   at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:445)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:342)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191)

[jira] [Commented] (HDFS-13768) Adding replicas to volume map makes DataNode start slowly

2018-10-08 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642695#comment-16642695
 ] 

Yiqun Lin commented on HDFS-13768:
--

Thanks for sharing the results. +1 for the v03 patch for branch-2. Committing...

>  Adding replicas to volume map makes DataNode start slowly 
> ---
>
> Key: HDFS-13768
> URL: https://issues.apache.org/jira/browse/HDFS-13768
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HDFS-13768-branch-2.01.patch, 
> HDFS-13768-branch-2.02.patch, HDFS-13768-branch-2.03.patch, 
> HDFS-13768.01-branch-2.patch, HDFS-13768.01.patch, HDFS-13768.02.patch, 
> HDFS-13768.03.patch, HDFS-13768.04.patch, HDFS-13768.05.patch, 
> HDFS-13768.06.patch, HDFS-13768.07.patch, HDFS-13768.patch, screenshot-1.png
>
>
> We found the DN starting very slowly when rolling-upgrading our cluster. 
> When we restart DNs, they start slowly and do not register to the NN 
> immediately, which causes a lot of the following errors:
> {noformat}
> DataXceiver error processing WRITE_BLOCK operation  src: /xx.xx.xx.xx:64360 
> dst: /xx.xx.xx.xx:50010
> java.io.IOException: Not ready to serve the block pool, 
> BP-1508644862-xx.xx.xx.xx-1493781183457.
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAndWaitForBP(DataXceiver.java:1290)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:630)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looking into the DN startup logic, it does the initial block pool operation 
> before the registration. During block pool initialization, we found that 
> adding replicas to the volume map is the most expensive operation. Related 
> log:
> {noformat}
> 2018-07-26 10:46:23,771 INFO [Thread-105] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/1/dfs/dn/current: 242722ms
> 2018-07-26 10:46:26,231 INFO [Thread-109] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/5/dfs/dn/current: 245182ms
> 2018-07-26 10:46:32,146 INFO [Thread-112] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/8/dfs/dn/current: 251097ms
> 2018-07-26 10:47:08,283 INFO [Thread-106] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/2/dfs/dn/current: 287235ms
> {noformat}
> Currently the DN uses an independent thread to scan and add replicas for 
> each volume, but we still need to wait for the slowest thread to finish its 
> work. So the main problem here is how to make these threads run faster.
> The jstack we get when DN blocking in the adding replica:
> {noformat}
> "Thread-113" #419 daemon prio=5 os_prio=0 tid=0x7f40879ff000 nid=0x145da 
> runnable [0x7f4043a38000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.list(Native Method)
>   at java.io.File.list(File.java:1122)
>   at java.io.File.listFiles(File.java:1207)
>   at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:445)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:342)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191)
> {noformat}
> One improvement: maybe we can use a ForkJoinPool to do this recursive task, 
> 
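
For context, a hedged sketch of that ForkJoinPool idea (class structure and 
names are assumptions, not the actual BlockPoolSlice code):

{code:java}
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Walk a volume directory tree with a ForkJoinPool so sibling
// subdirectories are scanned in parallel, instead of a single thread
// walking the whole volume sequentially.
class ReplicaScanTask extends RecursiveAction {
  private final File dir;

  ReplicaScanTask(File dir) {
    this.dir = dir;
  }

  @Override
  protected void compute() {
    File[] children = dir.listFiles();
    if (children == null) {
      return; // not a directory, or the listing failed
    }
    List<ReplicaScanTask> subTasks = new ArrayList<>();
    for (File child : children) {
      if (child.isDirectory()) {
        subTasks.add(new ReplicaScanTask(child));
      }
      // else: add the replica file to the volume map (omitted here)
    }
    invokeAll(subTasks); // fork the subdirectory scans and wait for them
  }
}
// Usage sketch: one work-stealing pool shared across volumes instead of
// one thread per volume, e.g.:
//   new ForkJoinPool().invoke(new ReplicaScanTask(volumeDir));
{code}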

[jira] [Updated] (HDDS-568) Ozone sh unable to delete volume

2018-10-08 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-568:
---
Attachment: HDDS-568.002.patch
Status: Patch Available  (was: Open)

[~ljain] - Thank you for reviewing. Good catch about UpdateHandler!

I have added a method in Handler so it can be used by the Info, Update, and 
Delete handlers, reducing code duplication.

Also updated the unit tests.

 

Patch 002 addresses these.
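
For reference, a minimal sketch of that shared helper (the method name is an 
assumption, not the actual patch):

{code:java}
// Lives in the common Handler base class and is used by the Info, Update
// and Delete handlers alike, so the volume-name parsing exists in exactly
// one place.
protected String parseVolumeName(java.net.URI ozoneURI) {
  // Strip all leading slashes rather than blindly skipping one character.
  return ozoneURI.getPath().replaceAll("^/+", "");
}
{code}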

 

> Ozone sh unable to delete volume
> 
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.2.1
>Reporter: Soumitra Sulav
>Assignee: Dinesh Chitlangia
>Priority: Blocker
> Attachments: HDDS-568.001.patch, HDDS-568.002.patch
>
>
> An Ozone filesystem volume isn't getting deleted even though the underlying 
> bucket is deleted and the volume is currently empty.
> The ozone sh command throws an error: VOLUME_NOT_FOUND even though the 
> volume is there.
> On trying to create it again, it says: error:VOLUME_ALREADY_EXISTS (as 
> expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-568) Ozone sh unable to delete volume

2018-10-08 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-568:
---
Status: Open  (was: Patch Available)

> Ozone sh unable to delete volume
> 
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.2.1
>Reporter: Soumitra Sulav
>Assignee: Dinesh Chitlangia
>Priority: Blocker
> Attachments: HDDS-568.001.patch
>
>
> An Ozone filesystem volume isn't getting deleted even though the underlying 
> bucket is deleted and the volume is currently empty.
> The ozone sh command throws an error: VOLUME_NOT_FOUND even though the 
> volume is there.
> On trying to create it again, it says: error:VOLUME_ALREADY_EXISTS (as 
> expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-08 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642691#comment-16642691
 ] 

Anu Engineer commented on HDDS-577:
---

{quote}do we need getOzoneBucketName, as this is the same name as the S3 
bucket name, right?
{quote}
Right now it is the same, but having that API allows us to plug in different 
kinds of policies in the future. If all our clients use that API, it is easy 
for us to make changes.
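
For illustration, a hedged sketch of that indirection (the method body is an 
assumption, not the actual patch):

{code:java}
// Clients always resolve the Ozone bucket through this call. Today the
// mapping is the identity, but the policy can change behind the API
// without touching any client code.
public String getOzoneBucketName(String s3BucketName) {
  return s3BucketName; // current policy: S3 bucket name == Ozone bucket name
}
{code}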

 

> Support S3 buckets as first class objects in Ozone Manager - 2
> --
>
> Key: HDDS-577
> URL: https://issues.apache.org/jira/browse/HDDS-577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-577.001.patch, HDDS-577.02.patch
>
>
> This patch is a continuation of HDDS-572. The earlier patch created S3 API 
> support in Ozone Manager; this patch exposes that API to the RPC client. In 
> the next few patches we will add support for S3Gateway and MiniOzone-based 
> testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-08 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642677#comment-16642677
 ] 

Bharat Viswanadham edited comment on HDDS-577 at 10/9/18 1:29 AM:
--

Hi [~anu]

Thanks for the patch.

I have a question: do we need getOzoneBucketName, as this is the same name as 
the S3 bucket name, right? As in S3Table we store s3+<>/bucketname. (Having 
getOzoneVolumeName looks needed, as it returns the volume name which we 
construct through the mapping.)

So do we need this API, or can we remove it?

LGTM; a few things were missing in the patch.

*Added the following changes in patch v02:*
 # Status code mapping from OzoneManagerProtocolClientSideTranslatorPB was 
missing; added it.
 # The new methods needed to be added to ObjectStore, as we use the ozone 
client in S3Gateway API requests.
 # Added test cases for the new S3 APIs.

 

 

[~elek]

Let me know your thoughts on the v02 patch.


was (Author: bharatviswa):
Hi [~anu]

Thanks for the patch.

I have a question: do we need getOzoneBucketName, as this is the same name as 
the S3 bucket name, right? As in S3Table we store s3+<>/bucketname. (Having 
getOzoneVolumeName looks needed, as it returns the volume name which we 
construct through the mapping.)

So do we need this API, or can we remove it?

LGTM; a few things are missing in the patch:
 # Status code mapping from OzoneManagerProtocolClientSideTranslatorPB is 
missing; added it.
 # The new methods need to be added to ObjectStore, as we use the ozone client 
in S3Gateway API requests.
 # Added test cases for the new S3 APIs.

 

 

[~elek]

Let me know your thoughts on the v02 patch.

> Support S3 buckets as first class objects in Ozone Manager - 2
> --
>
> Key: HDDS-577
> URL: https://issues.apache.org/jira/browse/HDDS-577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-577.001.patch, HDDS-577.02.patch
>
>
> This patch is a continuation of HDDS-572. The earlier patch created S3 API 
> support in Ozone Manager; this patch exposes that API to the RPC client. In 
> the next few patches we will add support for S3Gateway and MiniOzone-based 
> testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-08 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642677#comment-16642677
 ] 

Bharat Viswanadham commented on HDDS-577:
-

Hi [~anu]

Thanks for the patch.

I have a question: do we need getOzoneBucketName, as this is the same name as 
the S3 bucket name, right? As in S3Table we store s3+<>/bucketname. (Having 
getOzoneVolumeName looks needed, as it returns the volume name which we 
construct through the mapping.)

So do we need this API, or can we remove it?

LGTM; a few things are missing in the patch:
 # Status code mapping from OzoneManagerProtocolClientSideTranslatorPB is 
missing; added it.
 # The new methods need to be added to ObjectStore, as we use the ozone client 
in S3Gateway API requests.
 # Added test cases for the new S3 APIs.

 

 

[~elek]

Let me know your thoughts on the v02 patch.

> Support S3 buckets as first class objects in Ozone Manager - 2
> --
>
> Key: HDDS-577
> URL: https://issues.apache.org/jira/browse/HDDS-577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-577.001.patch, HDDS-577.02.patch
>
>
> This patch is a continuation of HDDS-572. The earlier patch created S3 API 
> support in Ozone Manager; this patch exposes that API to the RPC client. In 
> the next few patches we will add support for S3Gateway and MiniOzone-based 
> testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-577) Support S3 buckets as first class objects in Ozone Manager - 2

2018-10-08 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-577:

Attachment: HDDS-577.02.patch

> Support S3 buckets as first class objects in Ozone Manager - 2
> --
>
> Key: HDDS-577
> URL: https://issues.apache.org/jira/browse/HDDS-577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-577.001.patch, HDDS-577.02.patch
>
>
> This patch is a continuation of HDDS-572. The earlier patch created S3 API 
> support in Ozone Manager; this patch exposes that API to the RPC client. In 
> the next few patches we will add support for S3Gateway and MiniOzone-based 
> testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-590) Add unit test for HDDS-583

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642668#comment-16642668
 ] 

Hadoop QA commented on HDDS-590:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 57s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942930/HDDS-590..002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 17ceffe21397 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 347ea38 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 

[jira] [Assigned] (HDDS-539) ozone datanode ignores the invalid options

2018-10-08 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-539:
--

Assignee: Vinicius Higa Murakami

> ozone datanode ignores the invalid options
> --
>
> Key: HDDS-539
> URL: https://issues.apache.org/jira/browse/HDDS-539
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Vinicius Higa Murakami
>Priority: Major
>  Labels: newbie
>
> The ozone datanode command starts the datanode and ignores any invalid 
> option, apart from -help:
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -help
> Starts HDDS Datanode
> {code}
> For all other invalid options, it just ignores them and starts the DN, like 
> below:
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -ABC
> 2018-09-22 00:59:34,462 [main] INFO - STARTUP_MSG:
> /
> STARTUP_MSG: Starting HddsDatanodeService
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-481027-01-02.hwx.site/172.27.54.20
> STARTUP_MSG: args = [-ABC]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Commented] (HDDS-572) Support S3 buckets as first class objects in Ozone Manager - 1

2018-10-08 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642638#comment-16642638
 ] 

Anu Engineer commented on HDDS-572:
---

{quote} # (bucket creation) Can you please help me imagine how it can work? I 
think you need to use the ozone cli anyway; just now, instead of create volume 
+ create bucket, you should use create s3 bucket. Is that right?{quote}
There is a short-term and a long-term answer for this. In both worlds, we do 
*not* need an ozone volume create command. In the short term, we will use the 
AWSAccessKeyID as the user name and create a volume when the S3 create bucket 
call is made. In the long run, we will store S3 credentials or provide a 
mechanism to map the AWSAccessKeyID and Secret. It might be that a particular 
user has an application that moves seamlessly between Ozone S3 and AWS S3; in 
that case, we will provide an interface that allows the user to pick the 
AWSAccessKeyID and Secret.
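
For illustration, a hedged sketch of that short-term flow (all helper names 
here are assumptions, not actual Ozone APIs):

{code:java}
// On an S3 create-bucket call, derive the Ozone volume from the
// AWSAccessKeyID and create it on demand, so no explicit
// "ozone volume create" step is ever needed.
void createS3Bucket(String awsAccessKeyId, String s3BucketName)
    throws IOException {
  String volume = "s3" + awsAccessKeyId;     // hypothetical mapping
  if (!volumeExists(volume)) {               // hypothetical helper
    createVolume(volume, awsAccessKeyId);    // owner = access-key user
  }
  createBucket(volume, s3BucketName);        // hypothetical helper
  // Also record s3BucketName -> volume/bucket in the S3 table, so that
  // cluster-wide unique bucket names can be resolved later.
}
{code}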
{quote}2. I understand this. But still you should know the endpoint. I can't 
see a big difference between knowing the URL endpoint and knowing the volume.
{quote}
The endpoint is an S3 artifact; what we are trying to build is as close as 
possible to the abstractions of S3. So yes, the endpoint will remain. What we 
avoid is the Ozone abstraction of volumes leaking into the S3 abstractions, 
and most importantly we keep the ability to behave like S3. I have already 
discussed that earlier.
{quote}3.1. My expectation was just to list all the available buckets inside a 
volume. Anything wrong with this approach?
{quote}
Nothing wrong, but that is *not* the S3 experience. If I have an AWS Access 
Key ID and secret, I should be able to list all the buckets I have; that is 
what the S3 behavior is. We will be able to provide that same experience with 
this patch.
{quote}3.2. Yes, I understand it. If we need to support cluster-unique bucket 
names, we need this table.
{quote}
Yes, that is another important reason why we want to move to this model.

 
{quote}4. As I wrote earlier, I can't see a big difference between remembering 
'cluster + s3 bucketname' and remembering 'volume + bucketname'.
{quote}
The difference is that a user knows a fixed number of clusters or S3 regions. 
They are countable – and there is a well-known list.

On the other hand, the list of potential volume names is infinite (not really, 
but practically :)). So providing just the cluster + s3 bucket name is a 
really important step for the user experience.
{quote}5. What does it mean 'mount an S3 bucket via Ozone FS'? Currently we can 
access volume/bucket without ozonefs. I think it's easy enough.
{quote}
What I meant was: if and when a user wants to mount or access an S3 bucket via 
ozone, they can ask the cluster what Ozone path to use. In the current model, 
we force users to remember that; in this model, the user has a mechanism to 
discover it.
{quote}6. I think the security credentials could also be added to the bucket 
as metadata.
{quote}
Eventually, maybe. As I mentioned in the first point, we might need to have an 
S3 creds store – but we will cross that bridge when we get there.
{quote}While I am not against this patch, I can't see the immediate benefit. 
For me it's an additional complexity but we can handle almost all the problems 
in a more simple way (maybe with some compromise).
{quote}
Once we release software with a compromise, we can never call it back. Someone 
somewhere will be using it, and it becomes very hard to break that. Also, in 
this case, if a 25KB patch can solve the compromise issue, why not?
{quote}Other question: why is it part of OM? Why don't we put it in the s3 
gateway?
{quote}
This is really metadata for the Object Store. If we put it in the S3Gateway, 
we would have to support HA for the S3Gateway. Right now, the S3Gateway is 
stateless and the OzoneManager is stateful, so it makes sense to add this 
state to the metadata server. Then, when Ozone Manager becomes HA-enabled, 
this data will also be replicated.

Sorry for the delay in replying; I somehow missed the comments after the 
commit. [~bharatviswa], thanks for reminding me to reply to these questions.

> Support S3 buckets as first class objects in Ozone Manager - 1
> --
>
> Key: HDDS-572
> URL: https://issues.apache.org/jira/browse/HDDS-572
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-572.001.patch, HDDS-572.002.patch
>
>
> This Jira proposes to add support for S3 buckets as first class objects in 
> Ozone Manager. Currently we take the Ozone volume via the endPoint URL in 
> the AWS SDK. With this (and the next 2 patches), we can move away from using 
> the ozone volume in the URL.
> cc: [~elek], [~bharatviswa]



--
This message was sent by Atlassian JIRA

[jira] [Updated] (HDDS-590) Add unit test for HDDS-583

2018-10-08 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-590:
--
Attachment: HDDS-590..002.patch
Status: Patch Available  (was: Open)

> Add unit test for HDDS-583
> --
>
> Key: HDDS-590
> URL: https://issues.apache.org/jira/browse/HDDS-590
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-590..001.patch, HDDS-590..002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-590) Add unit test for HDDS-583

2018-10-08 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-590:
--
Status: Open  (was: Patch Available)

> Add unit test for HDDS-583
> --
>
> Key: HDDS-590
> URL: https://issues.apache.org/jira/browse/HDDS-590
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Attachments: HDDS-590..001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13878) HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642608#comment-16642608
 ] 

Hadoop QA commented on HDFS-13878:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 1 new + 443 unchanged - 0 fixed = 444 total (was 443) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
29s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13878 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942915/HDFS-13878.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9b7408a38058 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 347ea38 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25231/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25231/testReport/ |
| Max. process+thread count | 647 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25231/console |
| Powered by | 

[jira] [Commented] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-08 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642594#comment-16642594
 ] 

Hrishikesh Gadre commented on HDFS-13926:
-

[~xiaochen] please note that the latest patch (HDFS-13926-003.patch) fixes the 
javac warning. Also the unit test failures are unrelated to this change (either 
flaky tests or known issues).

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Attachments: HDFS-13926-002.patch, HDFS-13926-003.patch, 
> HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found that per-thread read stats 
> for EC are incorrect. This is because the striped reads are done 
> asynchronously on the worker threads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-523) Implement DeleteObject REST endpoint

2018-10-08 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642592#comment-16642592
 ] 

Bharat Viswanadham commented on HDDS-523:
-

Hi [~elek]

Thanks for the patch.

Code changes for DeleteObject LGTM.

*I have a few minor comments on robot tests.*

1. ${BUCKET} is set in 'Create volume and bucket for the tests', which is 
called only if OZONE_TEST is true. I think we need to make ${BUCKET} a 
variable as well, so this works both on the s3gateway and on an ozone cluster. 
Otherwise ${BUCKET} will not be set when OZONE_TEST is set to false.

2. Regarding 'Should not contain ${result} 500' in the awscli.robot tests: why 
do we need this change?

 

> Implement DeleteObject REST endpoint
> 
>
> Key: HDDS-523
> URL: https://issues.apache.org/jira/browse/HDDS-523
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-523.001.patch
>
>
> Simple delete Object call.
> Implemented by HDDS-444 without the acceptance tests.
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectDELETE.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-08 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HDFS-13926:

Attachment: HDFS-13926-003.patch

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Attachments: HDFS-13926-002.patch, HDFS-13926-003.patch, 
> HDFS-13926.01.patch, HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found that per-thread read stats 
> for EC are incorrect. This is because the striped reads are done 
> asynchronously on the worker threads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-478) Log files related to each daemon doesn't have proper startup and shutdown logs

2018-10-08 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-478:
--

Assignee: Dinesh Chitlangia

> Log files related to each daemon doesn't have proper startup and shutdown logs
> --
>
> Key: HDDS-478
> URL: https://issues.apache.org/jira/browse/HDDS-478
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
>
> All the logs (startup/shutdown messages) go into ozone.log. We have a 
> separate log file for each daemon, but that log file doesn't contain these 
> messages. 
> {noformat}
> [root@ctr-e138-1518143905142-468367-01-02 logs]# cat ozone.log.2018-09-16 
> | head -20
> 2018-09-16 05:29:59,638 [main] INFO (LogAdapter.java:51) - STARTUP_MSG:
> /
> STARTUP_MSG: Starting OzoneManager
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-468367-01-02.hwx.site/172.27.68.129
> STARTUP_MSG: args = [-createObjectStore]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Commented] (HDFS-13956) iNotify should include information to identify a file as either replicated or erasure coded

2018-10-08 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642585#comment-16642585
 ] 

Hrishikesh Gadre commented on HDFS-13956:
-

[~jojochuang] the failing tests in the latest run are either a known issue 
(HDFS-11396) or flaky tests. Please take a look and let me know.

> iNotify should include information to identify a file as either replicated or 
> erasure coded
> ---
>
> Key: HDFS-13956
> URL: https://issues.apache.org/jira/browse/HDFS-13956
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HDFS-13956-001.patch, HDFS-13956-002.patch, 
> HDFS-13956-003.patch, HDFS-13956-004.patch, HDFS-13956-005.patch
>
>
> Currently iNotify does not provide information to identify if a given file is 
> using replication or erasure coding mode. This would be very useful for the 
> downstream applications using iNotify functionality (e.g. to tag/search files 
> using erasure coding).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads

2018-10-08 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned HDFS-13926:


Assignee: Hrishikesh Gadre  (was: Xiao Chen)

> ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped 
> reads
> ---
>
> Key: HDFS-13926
> URL: https://issues.apache.org/jira/browse/HDFS-13926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Hrishikesh Gadre
>Priority: Major
> Attachments: HDFS-13926-002.patch, HDFS-13926.01.patch, 
> HDFS-13926.prelim.patch
>
>
> During some integration testing, [~nsheth] found that per-thread read stats 
> for EC are incorrect. This is because the striped reads are done 
> asynchronously on the worker threads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-598) Volume Delete operation fails if volume name is not suffixed with /

2018-10-08 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-598.
---
Resolution: Duplicate

[~hanishakoneru] Thanks for filing this issue. I believe this is possibly a 
duplicate of HDDS-568; my apologies that the title of HDDS-568 is not as clear 
as it should be. 

 

> Volume Delete operation fails if volume name is not suffixed with /
> ---
>
> Key: HDDS-598
> URL: https://issues.apache.org/jira/browse/HDDS-598
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Priority: Blocker
>  Labels: newbie
>
> If we try to delete a volume without specifying the leading '/', then the 
> delete operation fails
> {code:java}
> $ bin/ozone sh volume create vol1 -u xyz
> $ bin/ozone sh volume delete vol1
> Delete Volume failed, error:VOLUME_NOT_FOUND
> $ bin/ozone sh volume delete /vol1
> Volume vol1 is deleted{code}
> In {{DeleteVolumeHandler.java}}, the first character of the volume name is 
> skipped.
> {code:java}
> // we need to skip the slash in the URI path
> String volumeName = ozoneURI.getPath().substring(1);{code}
> We should skip only the leading /'s while interpreting the volume name, 
> similar to how we interpret the volume name in CreateVolumeHandler.
> {code:java}
> String volumeName = ozoneURI.getPath().replaceAll("^/+", "");{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-478) Log files related to each daemon doesn't have proper startup and shutdown logs

2018-10-08 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642583#comment-16642583
 ] 

Hanisha Koneru commented on HDDS-478:
-

I am getting the startup messages in the expected daemon .log file.

But the OM audit logs are being written to {{om-audit.log}} and also to the 
{{hadoop--om-<>.local.out}} file. We should be writing the audit logs only to 
the om-audit.log file.
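
If it helps, doubled log output like this is usually logger additivity; a 
hedged, programmatic log4j2 illustration (the logger name "OMAudit" is an 
assumption, not necessarily Ozone's actual configuration):

{code:java}
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.LoggerConfig;

public class AuditAdditivityExample {
  public static void main(String[] args) {
    // An additive logger forwards its events up to the root logger, so
    // audit entries land both in om-audit.log and in the daemon .out file.
    LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
    LoggerConfig audit = ctx.getConfiguration().getLoggerConfig("OMAudit");
    audit.setAdditive(false); // keep events in the audit appender only
    ctx.updateLoggers();
  }
}
{code}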

> Log files related to each daemon doesn't have proper startup and shutdown logs
> --
>
> Key: HDDS-478
> URL: https://issues.apache.org/jira/browse/HDDS-478
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Priority: Major
>  Labels: alpha2
>
> All the logs (startup/shutdown messages) go into ozone.log. We have a 
> separate log file for each daemon, but that log file doesn't contain these 
> messages. 
> {noformat}
> [root@ctr-e138-1518143905142-468367-01-02 logs]# cat ozone.log.2018-09-16 
> | head -20
> 2018-09-16 05:29:59,638 [main] INFO (LogAdapter.java:51) - STARTUP_MSG:
> /
> STARTUP_MSG: Starting OzoneManager
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-468367-01-02.hwx.site/172.27.68.129
> STARTUP_MSG: args = [-createObjectStore]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Updated] (HDFS-13976) Backport HDFS-12813 to branch-2.9

2018-10-08 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13976:
--
Attachment: HDFS-12813.branch-2.9.001.patch

> Backport HDFS-12813 to branch-2.9
> -
>
> Key: HDFS-13976
> URL: https://issues.apache.org/jira/browse/HDFS-13976
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 2.9.2
>
> Attachments: HDFS-12813.branch-2.001.patch, 
> HDFS-12813.branch-2.9.001.patch
>
>
> 2.9 also shows the issue from HDFS-12813:
> HDFS-11395 fixed the problem where the MultiException thrown by 
> RequestHedgingProxyProvider was hidden. However, when the target proxy size 
> is 1, unwrapping is not done for the InvocationTargetException. For a target 
> proxy size of 1, the unwrapping should be done only to the first level, 
> whereas for multiple proxies it should be done at 2 levels.
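
As a hedged illustration of that unwrapping rule (illustrative only, not the 
actual RequestHedgingProxyProvider code):

{code:java}
import java.lang.reflect.InvocationTargetException;

final class UnwrapExample {
  // With one target proxy the InvocationTargetException wraps the real
  // error directly; with several proxies it wraps a MultiException that
  // wraps the per-proxy errors, so one extra level must be peeled off.
  static Throwable unwrap(InvocationTargetException ite, int numProxies) {
    Throwable cause = ite.getCause();          // level 1: always unwrap
    if (numProxies > 1 && cause != null && cause.getCause() != null) {
      cause = cause.getCause();                // level 2: skip MultiException
    }
    return cause;
  }
}
{code}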



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-598) Volume Delete operation fails if volume name is not suffixed with /

2018-10-08 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-598:
---

 Summary: Volume Delete operation fails if volume name is not 
suffixed with /
 Key: HDDS-598
 URL: https://issues.apache.org/jira/browse/HDDS-598
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru


If we try to delete a volume without specifying the leading '/', then the 
delete operation fails
{code:java}
$ bin/ozone sh volume create vol1 -u xyz

$ bin/ozone sh volume delete vol1
Delete Volume failed, error:VOLUME_NOT_FOUND

$ bin/ozone sh volume delete /vol1
Volume vol1 is deleted{code}
In {{DeleteVolumeHandler.java}}, the first character of the volume name is 
skipped.
{code:java}
// we need to skip the slash in the URI path
String volumeName = ozoneURI.getPath().substring(1);{code}
We should skip only the leading /'s while interpreting the volume name, 
similar to how we interpret the volume name in CreateVolumeHandler.
{code:java}
String volumeName = ozoneURI.getPath().replaceAll("^/+", "");{code}
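
For concreteness, a small illustration of the two parsings (a hedged sketch; 
the URI handling around it is simplified):

{code:java}
String noSlash = "vol1";                      // path when '/' is omitted
String bad = noSlash.substring(1);            // "ol1"  -> VOLUME_NOT_FOUND
String good = noSlash.replaceAll("^/+", "");  // "vol1" -> correct

String withSlash = "/vol1";                   // path when '/' is supplied
String ok1 = withSlash.substring(1);          // "vol1" -> works by accident
String ok2 = withSlash.replaceAll("^/+", ""); // "vol1" -> works either way
{code}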



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13976) Backport HDFS-12813 to branch-2.9

2018-10-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642575#comment-16642575
 ] 

Íñigo Goiri commented on HDFS-13976:


This patch is already in branch-2.
[~lukmajercak] can you post the patch for branch-2.9?
I don't think Yetus runs for either branch though.
Can you post the results of the test in your builder just to verify?

> Backport HDFS-12813 to branch-2.9
> -
>
> Key: HDFS-13976
> URL: https://issues.apache.org/jira/browse/HDFS-13976
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 2.9.2
>
> Attachments: HDFS-12813.branch-2.001.patch
>
>
> 2.9 also shows the issue from HDFS-12813:
> HDFS-11395 fixed the problem where the MultiException thrown by 
> RequestHedgingProxyProvider was hidden. However, when the target proxy size 
> is 1, unwrapping is not done for the InvocationTargetException. For a target 
> proxy size of 1, the unwrapping should be done only to the first level, 
> whereas for multiple proxies it should be done at 2 levels.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication

2018-10-08 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12284:
---
Summary: RBF: Support for Kerberos authentication  (was: rjlvgkuerbrueunvu)

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, 
> HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, 
> HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, 
> HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) rjlvgkuerbrueunvu

2018-10-08 Thread Sherwood Zheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sherwood Zheng updated HDFS-12284:
--
Summary: rjlvgkuerbrueunvu  (was: ngnujfnchljgnfnrtbe)

> rjlvgkuerbrueunvu
> -
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, 
> HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, 
> HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, 
> HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) ngnujfnchljgnfnrtbe

2018-10-08 Thread Sherwood Zheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sherwood Zheng updated HDFS-12284:
--
Summary: ngnujfnchljgnfnrtbe  (was: fhihtteurlhlrudvvelcf)

> ngnujfnchljgnfnrtbe
> ---
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, 
> HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, 
> HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, 
> HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) 399625

2018-10-08 Thread Sherwood Zheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sherwood Zheng updated HDFS-12284:
--
Summary: 399625  (was: RBF: Support for Kerberos authentication)

> 399625
> --
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, 
> HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, 
> HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, 
> HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) fhihtteurlhlrudvvelcf

2018-10-08 Thread Sherwood Zheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sherwood Zheng updated HDFS-12284:
--
Summary: fhihtteurlhlrudvvelcf  (was: 399625)

> fhihtteurlhlrudvvelcf
> -
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284-HDFS-13532.004.patch, 
> HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, 
> HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, 
> HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, 
> HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13977) NameNode can kill itself if it tries to send too many txns to a QJM simultaneously

2018-10-08 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-13977:
--

 Summary: NameNode can kill itself if it tries to send too many 
txns to a QJM simultaneously
 Key: HDFS-13977
 URL: https://issues.apache.org/jira/browse/HDFS-13977
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, qjm
Affects Versions: 2.7.7
Reporter: Erik Krogen
Assignee: Erik Krogen


h3. Problem & Logs
We recently encountered an issue on a large cluster (running 2.7.4) in which 
the NameNode killed itself because it was unable to communicate with the JNs 
via QJM. We discovered that it was the result of the NameNode trying to send a 
huge batch of over 1 million transactions to the JNs in a single RPC:
{code:title=NameNode Logs}
WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Remote 
journal X.X.X.X: failed to
 write txns 1000-11153636. Will try to write to this JN again after the 
next log roll.
...
WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Took 1098ms 
to send a batch of 1153637 edits (335886611 bytes) to remote journal 
X.X.X.X:
{code}
{code:title=JournalNode Logs}
INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for port 8485: 
readAndProcess from client X.X.X.X threw exception [java.io.IOException: 
Requested data length 335886776 is longer than maximum configured RPC length 
67108864.  RPC came from X.X.X.X]
java.io.IOException: Requested data length 335886776 is longer than maximum 
configured RPC length 67108864.  RPC came from X.X.X.X
at 
org.apache.hadoop.ipc.Server$Connection.checkDataLength(Server.java:1610)
at 
org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1672)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:897)
at 
org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:753)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724)
{code}
The JournalNodes rejected the RPC because it had a size well over the 64MB 
default {{ipc.maximum.data.length}}.

This was triggered by a huge number of files all hitting a hard lease timeout 
simultaneously, causing the NN to force-close them all at once. This can be a 
particularly nasty bug as the NN will attempt to re-send this same huge RPC on 
restart, as it loads an fsimage which still has all of these open files that 
need to be force-closed.

h3. Proposed Solution
To solve this we propose to modify {{EditsDoubleBuffer}} to add a "hard limit" 
based on the value of {{ipc.maximum.data.length}}. When {{writeOp()}} or 
{{writeRaw()}} is called, first check the size of {{bufCurrent}}. If it exceeds 
the hard limit, block the writer until the buffer is flipped and {{bufCurrent}} 
becomes {{bufReady}}. This gives some self-throttling to prevent the NameNode 
from killing itself in this way.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13976) Backport HDFS-12813 to branch-2.9

2018-10-08 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13976:
---
Description: 
2.9 also shows the issue from HDFS-12813:
HDFS-11395 fixed the problem where the MultiException thrown by 
RequestHedgingProxyProvider was hidden. However when the target proxy size is 
1, then unwrapping is not done for the InvocationTargetException. for target 
proxy size of 1, the unwrapping should be done till first level where as for 
multiple proxy size, it should be done at 2 levels.

> Backport HDFS-12813 to branch-2.9
> -
>
> Key: HDFS-13976
> URL: https://issues.apache.org/jira/browse/HDFS-13976
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 2.9.2
>
> Attachments: HDFS-12813.branch-2.001.patch
>
>
> 2.9 also shows the issue from HDFS-12813:
> HDFS-11395 fixed the problem where the MultiException thrown by 
> RequestHedgingProxyProvider was hidden. However when the target proxy size is 
> 1, then unwrapping is not done for the InvocationTargetException. for target 
> proxy size of 1, the unwrapping should be done till first level where as for 
> multiple proxy size, it should be done at 2 levels.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13976) Backport HDFS-12813 to branch-2.9

2018-10-08 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13976:
--
Attachment: HDFS-12813.branch-2.001.patch

> Backport HDFS-12813 to branch-2.9
> -
>
> Key: HDFS-13976
> URL: https://issues.apache.org/jira/browse/HDFS-13976
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 2.9.2
>
> Attachments: HDFS-12813.branch-2.001.patch
>
>
> 2.9 also shows the issue from HDFS-12813:
> HDFS-11395 fixed the problem where the MultiException thrown by 
> RequestHedgingProxyProvider was hidden. However when the target proxy size is 
> 1, then unwrapping is not done for the InvocationTargetException. for target 
> proxy size of 1, the unwrapping should be done till first level where as for 
> multiple proxy size, it should be done at 2 levels.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13976) Backport HDFS-12813 to branch-2.9

2018-10-08 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13976:
--

Assignee: Lukas Majercak

> Backport HDFS-12813 to branch-2.9
> -
>
> Key: HDFS-13976
> URL: https://issues.apache.org/jira/browse/HDFS-13976
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 2.9.2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13976) Backport HDFS-12813 to branch-2.9

2018-10-08 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13976:
--
Fix Version/s: 2.9.2

> Backport HDFS-12813 to branch-2.9
> -
>
> Key: HDFS-13976
> URL: https://issues.apache.org/jira/browse/HDFS-13976
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, hdfs-client
>Reporter: Lukas Majercak
>Priority: Major
> Fix For: 2.9.2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13976) Backport HDFS-12813 to branch-2.9

2018-10-08 Thread Lukas Majercak (JIRA)
Lukas Majercak created HDFS-13976:
-

 Summary: Backport HDFS-12813 to branch-2.9
 Key: HDFS-13976
 URL: https://issues.apache.org/jira/browse/HDFS-13976
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, hdfs-client
Reporter: Lukas Majercak






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-597) Ratis: Support secure gRPC endpoint with mTLS for Ratis

2018-10-08 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-597:
---

 Summary: Ratis: Support secure gRPC endpoint with mTLS for Ratis
 Key: HDDS-597
 URL: https://issues.apache.org/jira/browse/HDDS-597
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-596) Add robot test for OM Block Token

2018-10-08 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-596:
---

 Summary: Add robot test for OM Block Token 
 Key: HDDS-596
 URL: https://issues.apache.org/jira/browse/HDDS-596
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-595) Add robot test for OM Delegation Token

2018-10-08 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-595:
---

 Summary: Add robot test for OM Delegation Token 
 Key: HDDS-595
 URL: https://issues.apache.org/jira/browse/HDDS-595
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-591) Exclude kadm5.acl from ASF license check

2018-10-08 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-591:
---

Assignee: Ajay Kumar  (was: Xiaoyu Yao)

> Exclude kadm5.acl from ASF license check
> 
>
> Key: HDDS-591
> URL: https://issues.apache.org/jira/browse/HDDS-591
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>
>  This was reported from recent Jenkins run: 
> {code}
> !? 
> /testptch/hadoop/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/kadm5.acl
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-115) GRPC: Support secure gRPC endpoint with mTLS

2018-10-08 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-115:

Summary: GRPC: Support secure gRPC endpoint with mTLS   (was: Support 
secure gRPC endpoint with mTLS )

> GRPC: Support secure gRPC endpoint with mTLS 
> -
>
> Key: HDDS-115
> URL: https://issues.apache.org/jira/browse/HDDS-115
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-593) SCM CA: DN changes to use cert issued by SCM for GRPC mTLS

2018-10-08 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-593:

Summary: SCM CA: DN changes to use cert issued by SCM for GRPC mTLS  (was: 
SCM CA: DN changes to use cert issued by SCM)

> SCM CA: DN changes to use cert issued by SCM for GRPC mTLS
> --
>
> Key: HDDS-593
> URL: https://issues.apache.org/jira/browse/HDDS-593
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-593) GRPC: DN changes to use cert issued by SCM for GRPC mTLS

2018-10-08 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-593:

Summary: GRPC: DN changes to use cert issued by SCM for GRPC mTLS  (was: 
SCM CA: DN changes to use cert issued by SCM for GRPC mTLS)

> GRPC: DN changes to use cert issued by SCM for GRPC mTLS
> 
>
> Key: HDDS-593
> URL: https://issues.apache.org/jira/browse/HDDS-593
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-584) OzoneFS with HDP failing to run YARN jobs

2018-10-08 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642533#comment-16642533
 ] 

Arpit Agarwal commented on HDDS-584:


Is there a YARN configuration setting to turn off this check for the time being?

> OzoneFS with HDP failing to run YARN jobs
> -
>
> Key: HDDS-584
> URL: https://issues.apache.org/jira/browse/HDDS-584
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
> Environment: OS - RHEL7.3
> Openstack based VMs : 3 Node HDP, 3 Node Ozone
>Reporter: Soumitra Sulav
>Priority: Major
>
> YARN jobs are failing on ozonefs with below exception :
> {code:java}
> java.io.IOException: The ownership on the staging directory 
> /tmp/hadoop-yarn/staging/hdfs/.staging is not as expected. It is owned by . 
> The directory must be owned by the submitter hdfs or hdfs
> at 
> org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:152)
> at 
> org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:113)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:151)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
> {code}
> Example Job was run using below command both with user root & hdfs :
> {code:java}
> hadoop jar 
> /usr/hdp/3.0.0.0-1634/hadoop-mapreduce/hadoop-mapreduce-examples.jar 
> wordcount /hosts /tmp/hosts
> {code}
> YARN/MR Job is checking the file/folder ownership of the user staging 
> directory and if it doesn't matches with the user who is submitting the job, 
> it throws above exception.
> Ownership check happens in below file : 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java#L144]
> In HDDS/OzoneFS staging area is created accordingly but with no owner :
> {code:java}
> [root@hcatest-4 ~]# hdfs dfs -ls -R /tmp
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/root/ozone-0.3.0-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 18/10/08 10:11:11 INFO conf.Configuration: Removed undeclared tags:
> drwxrwxrwx - 0 2018-10-04 11:20 /tmp/entity-file-history
> drwxrwxrwx - 0 2018-10-04 11:20 /tmp/entity-file-history/active
> drwxrwxrwx - 0 2018-10-05 08:55 /tmp/hadoop-yarn
> drwxrwxrwx - 0 2018-10-05 08:55 /tmp/hadoop-yarn/staging
> drwxrwxrwx - 0 2018-10-05 11:56 /tmp/hadoop-yarn/staging/hdfs
> drwxrwxrwx - 0 2018-10-05 11:56 /tmp/hadoop-yarn/staging/hdfs/.staging
> drwxrwxrwx - 0 2018-10-05 11:56 
> /tmp/hadoop-yarn/staging/hdfs/.staging/job_1538654387547_0002
> -rw-rw-rw- 1 316239 2018-10-05 11:56 
> /tmp/hadoop-yarn/staging/hdfs/.staging/job_1538654387547_0002/job.jar
> -rw-rw-rw- 1 104 2018-10-05 11:56 
> /tmp/hadoop-yarn/staging/hdfs/.staging/job_1538654387547_0002/job.split
> -rw-rw-rw- 1 23 2018-10-05 11:56 
> 

[jira] [Created] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2018-10-08 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-594:
---

 Summary: SCM CA: DN sends CSR and uses certificate issued by SCM
 Key: HDDS-594
 URL: https://issues.apache.org/jira/browse/HDDS-594
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar
Assignee: Xiaoyu Yao






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2018-10-08 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-134:

Summary: SCM CA: OM sends CSR and uses certificate issued by SCM  (was: SCM 
CA: OM changes to use cert issued by SCM)

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-593) SCM CA: DN changes to use cert issued by SCM

2018-10-08 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-593:
---

 Summary: SCM CA: DN changes to use cert issued by SCM
 Key: HDDS-593
 URL: https://issues.apache.org/jira/browse/HDDS-593
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar
Assignee: Xiaoyu Yao






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-592) Fix ozone-secure.robot test

2018-10-08 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-592:
---

 Summary: Fix ozone-secure.robot test
 Key: HDDS-592
 URL: https://issues.apache.org/jira/browse/HDDS-592
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Ajay Kumar


The test need several changes to match with the trunk changes:

1. It is currently located in 
hadoop/hadoop-ozone/acceptance-test/src/test/robotframework/acceptance in 
HDDS-4 but in a different location on trunk.

2. "ozone sh" instead "ozone fs"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13523) Support observer nodes in MiniDFSCluster

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642524#comment-16642524
 ] 

Hadoop QA commented on HDFS-13523:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
40s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 17 unchanged - 0 fixed = 18 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13523 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942890/HDFS-13523-HDFS-12943.005.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 14fcd90dc38c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / dc76e0f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25230/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDDS-550) Serialize ApplyTransaction calls per Container in ContainerStateMachine

2018-10-08 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642515#comment-16642515
 ] 

Xiaoyu Yao commented on HDDS-550:
-

Thanks [~shashikant] for working on this. The patch looks pretty good to me.

+1 for catch up with the latest Ratis release which includes the third-party 
shading change.

I just have one question for the other changes below:   

ContainerStateMachine.java:

Line 247: not clear to me why we need to remove it and then put it back. The 
original code only put without remove. Can you clarify?

{code}

247: writeChunkFuture.thenApply(r -> writeChunkFutureMap.remove(entryIndex));
248: writeChunkFutureMap.put(entryIndex, writeChunkFuture);

{code}

> Serialize ApplyTransaction calls per Container in ContainerStateMachine
> ---
>
> Key: HDDS-550
> URL: https://issues.apache.org/jira/browse/HDDS-550
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-550.001.patch, HDDS-550.002.patch
>
>
> As part of handling Node failures in Ozone, the block commit need to happen 
> in order inside ContainerStateMachine per container. With RATIS-341, it is 
> guaranteed that the  applyTransaction calls for committing the write chunks 
> will be initiated only when the WriteStateMachine data for write Chunk 
> operations finish. 
> This Jira is aimed at making all the applyTransaction operations inside 
> ContainerStateMachine serial per container with a single thread Executor per 
> container handling all applyTransactions calls.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-569) Add proto changes required for CopyKey to support s3 put object -copy

2018-10-08 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642510#comment-16642510
 ] 

Ajay Kumar commented on HDDS-569:
-

Had offline discussion with [~bharatviswa] on this. My suggestion was to allow 
object copy by concept of alias. Allowing a given key to have many aliases and 
maintaining it in some data structure has additional advantages. A copy object 
can be just an operation to add an alias for a given key.

> Add proto changes required for CopyKey to support s3 put object -copy
> -
>
> Key: HDDS-569
> URL: https://issues.apache.org/jira/browse/HDDS-569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-569.00.patch, HDDS-569.01.patch
>
>
> This Jira is the starter Jira to make changes required for copy key request 
> in S3 to support copy key across the bucket. In ozone world, this is just a 
> metadata change. This Jira is created to just change .proto file for copy key 
> request support.
>  
> [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13878) HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST

2018-10-08 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642508#comment-16642508
 ] 

Siyao Meng commented on HDFS-13878:
---

Rev 003: Added zero and one snapshottable dir test case. Fixed FSOperations 
return value that could cause JSON parser exception (in zero snapshottable dir 
case).

> HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST
> ---
>
> Key: HDFS-13878
> URL: https://issues.apache.org/jira/browse/HDFS-13878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13878.001.patch, HDFS-13878.002.patch, 
> HDFS-13878.003.patch
>
>
> Implement GETSNAPSHOTTABLEDIRECTORYLIST  (from HDFS-13141) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13878) HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST

2018-10-08 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13878:
--
Attachment: HDFS-13878.003.patch
Status: Patch Available  (was: In Progress)

> HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST
> ---
>
> Key: HDFS-13878
> URL: https://issues.apache.org/jira/browse/HDFS-13878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13878.001.patch, HDFS-13878.002.patch, 
> HDFS-13878.003.patch
>
>
> Implement GETSNAPSHOTTABLEDIRECTORYLIST  (from HDFS-13141) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13878) HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST

2018-10-08 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13878:
--
Status: In Progress  (was: Patch Available)

> HttpFS: Implement GETSNAPSHOTTABLEDIRECTORYLIST
> ---
>
> Key: HDFS-13878
> URL: https://issues.apache.org/jira/browse/HDFS-13878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13878.001.patch, HDFS-13878.002.patch, 
> HDFS-13878.003.patch
>
>
> Implement GETSNAPSHOTTABLEDIRECTORYLIST  (from HDFS-13141) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-522) Implement PutBucket REST endpoint

2018-10-08 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-522 started by Bharat Viswanadham.
---
> Implement PutBucket REST endpoint
> -
>
> Key: HDDS-522
> URL: https://issues.apache.org/jira/browse/HDDS-522
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> The create bucket creates a bucket for the give volume.
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html
> Stub implementation is created as part of HDDS-444. Need to finalize, check 
> the missing headers, add acceptance tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13956) iNotify should include information to identify a file as either replicated or erasure coded

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642494#comment-16642494
 ] 

Hadoop QA commented on HDFS-13956:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13956 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942881/HDFS-13956-005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux cf44f6739e12 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDDS-587) Add new classes for pipeline management

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642473#comment-16642473
 ] 

Hadoop QA commented on HDDS-587:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 29s{color} | {color:orange} root: The patch generated 20 new + 0 unchanged - 
0 fixed = 20 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
17s{color} | {color:red} hadoop-hdds/common generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 38s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 24s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/common |
|  |  org.apache.hadoop.hdds.scm.pipeline.Pipeline defines equals and uses 
Object.hashCode()  At Pipeline.java:Object.hashCode()  At Pipeline.java:[lines 
118-127] |
| Failed junit tests | 

[jira] [Created] (HDFS-13975) TestBalancer#testMaxIterationTime fails sporadically

2018-10-08 Thread Jason Lowe (JIRA)
Jason Lowe created HDFS-13975:
-

 Summary: TestBalancer#testMaxIterationTime fails sporadically
 Key: HDFS-13975
 URL: https://issues.apache.org/jira/browse/HDFS-13975
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.2.0
Reporter: Jason Lowe


A number of precommit builds have seen this test fail like this:
{noformat}
java.lang.AssertionError: Unexpected iteration runtime: 4021ms > 3.5s
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testMaxIterationTime(TestBalancer.java:1649)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-550) Serialize ApplyTransaction calls per Container in ContainerStateMachine

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642455#comment-16642455
 ] 

Hadoop QA commented on HDDS-550:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
56s{color} | {color:red} hadoop-hdds/container-service in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
25s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} tools in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 18m  
8s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 18m  8s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
28s{color} | {color:green} root: The patch generated 0 new + 3 unchanged - 1 
fixed = 3 total (was 4) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
35s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
36s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
45s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
28s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  

[jira] [Commented] (HDFS-13925) Unit Test for transitioning between different states

2018-10-08 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642420#comment-16642420
 ] 

Konstantin Shvachko commented on HDFS-13925:


The patch is not applying. You probably worked on the branch before HDFS-13961.

> Unit Test for transitioning between different states
> 
>
> Key: HDFS-13925
> URL: https://issues.apache.org/jira/browse/HDFS-13925
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Sherwood Zheng
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-13925-HDFS-12943.000.patch, 
> HDFS-13925-HDFS-12943.001.patch, HDFS-13925-HDFS-12943.002.patch
>
>
> adding two unit tests:
> 1. Ensure that Active cannot be transitioned to Observer and vice versa.
> 2. Ensure that Observer can be transitioned to Standby and vice versa.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-591) Exclude kadm5.acl from ASF license check

2018-10-08 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-591:
---

 Summary: Exclude kadm5.acl from ASF license check
 Key: HDDS-591
 URL: https://issues.apache.org/jira/browse/HDDS-591
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


 This was reported from recent Jenkins run: 

{code}
!? 
/testptch/hadoop/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/kadm5.acl
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12459) Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642370#comment-16642370
 ] 

Hadoop QA commented on HDFS-12459:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  3s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
181 unchanged - 1 fixed = 182 total (was 182) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}177m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.TestMaintenanceState |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-12459 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917130/HDFS-12459.008.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8e12bb0e3dac 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (HDFS-13963) NN UI is broken with IE11

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16642363#comment-16642363
 ] 

Hadoop QA commented on HDFS-13963:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
39m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13963 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942861/HDFS-13963-01.patch |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux cd7e1b71fb99 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 347ea38 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 338 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25229/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> NN UI is broken with IE11
> -
>
> Key: HDFS-13963
> URL: https://issues.apache.org/jira/browse/HDFS-13963
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, ui
>Affects Versions: 3.1.1
>Reporter: Daisuke Kobayashi
>Assignee: Ayush Saxena
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-13963-01.patch, Screen Shot 2018-10-05 at 
> 20.22.20.png
>
>
> Internet Explorer 11 cannot correctly display the Namenode Web UI even though the NN 
> itself starts successfully. I have confirmed on 3.1.1 (latest release) 
> and 3.3.0-SNAPSHOT (current trunk) that the following message is shown.
> {code}
> Failed to retrieve data from /jmx?qry=java.lang:type=Memory, cause: 
> SyntaxError: Invalid character
> {code}
> Apparently, this is because {{dfshealth.html}} runs in IE9 mode by default.
> {code}
> <meta http-equiv="X-UA-Compatible" content="IE=9" />
> {code}
> Once the compatibility mode is changed to IE11 through the developer tools, the page 
> renders correctly.
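
Given that description, the natural direction for a fix is to stop pinning the page to IE9 and request the newest IE engine instead. The snippet below is an assumption about the shape of such a fix, not necessarily the attached patch:

{code:xml}
<!-- Assumed shape of the fix: request the newest IE rendering engine
     instead of pinning the page to IE9 document mode. -->
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
{code}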



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-590) Add unit test for HDDS-583

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16642357#comment-16642357
 ] 

Hadoop QA commented on HDDS-590:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-ozone/integration-test: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 59s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942877/HDDS-590..001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 60d78c426860 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 347ea38 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| checkstyle | 

[jira] [Commented] (HDDS-586) Incorrect url's in Ozone website

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16642356#comment-16642356
 ] 

Hadoop QA commented on HDDS-586:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDDS-586 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-586 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942892/HDDS-586.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1307/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Incorrect url's in Ozone website
> 
>
> Key: HDDS-586
> URL: https://issues.apache.org/jira/browse/HDDS-586
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: website
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Major
> Attachments: HDDS-586.001.patch, image-2018-10-08-22-21-14-025.png
>
>
> The below section of the website 
>  !image-2018-10-08-22-21-14-025.png! 
> is pointing to the incorrect URLs 
> https://hadoop.apache.org/downloads/ and 
> https://hadoop.apache.org/docs/latest/runningviadocker.html 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-445) Create a logger to print out all of the incoming requests

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16642355#comment-16642355
 ] 

Hadoop QA commented on HDDS-445:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
36m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-445 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942876/HDDS-445.01.patch |
| Optional Tests |  asflicense  mvnsite  unit  |
| uname | Linux 225a7e1a1310 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 347ea38 |
| maven | version: Apache Maven 3.3.9 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1305/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1305/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Create a logger to print out all of the incoming requests
> -
>
> Key: HDDS-445
> URL: https://issues.apache.org/jira/browse/HDDS-445
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-445.00.patch, HDDS-445.01.patch
>
>
> For the HTTP server of HDDS-444 we need an option to print out all the 
> HttpRequests (header + body).
> To create a 100% S3-compatible interface, we need to test it with multiple 
> external tools (such as s3cli). While mitmproxy is always our best friend, to 
> make it easier to identify problems we need a way to log all the 
> incoming requests with a logger that can be turned on.
> Most probably we already have such a filter in Hadoop/Jetty; the only 
> thing we need is to configure it.
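
As an illustration of the kind of filter meant here, a minimal servlet Filter that dumps the method, URI and headers of each request behind a TRACE-level guard might look like the sketch below. The class name and registration are hypothetical, and body capture is omitted because it requires wrapping the request stream; the actual patch may simply configure an existing Hadoop/Jetty filter instead.

{code:java}
// Hypothetical sketch, not the HDDS-445 patch: logs the method, URI and
// headers of each incoming request when TRACE logging is enabled.
import java.io.IOException;
import java.util.Enumeration;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RequestDumpFilter implements Filter {
  private static final Logger LOG =
      LoggerFactory.getLogger(RequestDumpFilter.class);

  @Override
  public void init(FilterConfig filterConfig) {
    // No configuration needed for this sketch.
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp,
      FilterChain chain) throws IOException, ServletException {
    if (LOG.isTraceEnabled() && req instanceof HttpServletRequest) {
      HttpServletRequest http = (HttpServletRequest) req;
      StringBuilder sb = new StringBuilder()
          .append(http.getMethod()).append(' ')
          .append(http.getRequestURI()).append('\n');
      Enumeration<String> names = http.getHeaderNames();
      while (names.hasMoreElements()) {
        String name = names.nextElement();
        sb.append(name).append(": ").append(http.getHeader(name)).append('\n');
      }
      LOG.trace("Incoming request:\n{}", sb);
    }
    chain.doFilter(req, resp);  // Always pass the request along.
  }

  @Override
  public void destroy() {
    // Nothing to clean up.
  }
}
{code}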



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-588) SelfSignedCertificate#generateCertificate should sign the certificate with the configured security provider

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16642353#comment-16642353
 ] 

Hadoop QA commented on HDDS-588:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
43s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
28s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-588 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942875/HDDS-588-HDDS-4.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 48dfc6986569 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / 8109215 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1304/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1304/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 394 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1304/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was 

[jira] [Commented] (HDFS-13970) CacheManager Directives Map

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16642351#comment-16642351
 ] 

Hadoop QA commented on HDFS-13970:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 34 unchanged - 0 fixed = 37 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13970 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942849/HDFS-13970.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b97e6af598c8 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d7c7f68 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HDFS-12459) Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16642341#comment-16642341
 ] 

Hadoop QA commented on HDFS-12459:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  7s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
180 unchanged - 1 fixed = 181 total (was 181) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-12459 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917130/HDFS-12459.008.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6a58c38d4f2d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d7c7f68 |
| maven | version: Apache Maven 

[jira] [Commented] (HDDS-568) Ozone sh unable to delete volume

2018-10-08 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16642325#comment-16642325
 ] 

Lokesh Jain commented on HDDS-568:
--

[~dineshchitlangia] Thanks for working on this! The patch looks good to me. 
Please find my comments below.
 # Can you please add a unit test for the fix?
 # We can define a function in the Handler class for verifying volumeName, 
because both InfoVolumeHandler and DeleteVolumeHandler implement the same 
functionality (see the sketch below). Can you also make the change in 
UpdateVolumeHandler?
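
For illustration, the shared check suggested in comment 2 might look roughly like the sketch below, placed in the common Handler base class; the method name and signature are assumptions for the example, not the actual Ozone code.

{code:java}
// Hypothetical sketch of the shared helper; the real Handler/address
// parsing in Ozone may differ.
protected String verifyVolumeName(String volumeName) {
  // Reject empty names and names carrying path components, so that
  // Info/Delete/UpdateVolumeHandler all validate input the same way.
  if (volumeName == null || volumeName.isEmpty()) {
    throw new IllegalArgumentException("Volume name is required.");
  }
  if (volumeName.contains("/")) {
    throw new IllegalArgumentException(
        "Invalid volume name. Delimiters (/) not allowed: " + volumeName);
  }
  return volumeName;
}
{code}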

> Ozone sh unable to delete volume
> 
>
> Key: HDDS-568
> URL: https://issues.apache.org/jira/browse/HDDS-568
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.2.1
>Reporter: Soumitra Sulav
>Assignee: Dinesh Chitlangia
>Priority: Blocker
> Attachments: HDDS-568.001.patch
>
>
> An Ozone filesystem volume isn't getting deleted even though the underlying 
> bucket has been deleted and the volume is currently empty.
> The ozone sh command throws an error: VOLUME_NOT_FOUND even though it's there.
> On trying to create it again, it says: error:VOLUME_ALREADY_EXISTS (as expected).
> {code:java}
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh bucket list fstestvol
> [ ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume delete fstestvol
> Delete Volume failed, error:VOLUME_NOT_FOUND
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume list
> [ {
>   "owner" : {
> "name" : "root"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "fstestvol",
>   "createdOn" : "Fri, 21 Sep 2018 11:19:23 GMT",
>   "createdBy" : "root"
> } ]
> [root@hcatest-1 ozone-0.3.0-SNAPSHOT]# ozone sh volume create fstestvol 
> -u=hdfs
> 2018-10-03 10:14:49,151 [main] INFO - Creating Volume: fstestvol, with hdfs 
> as owner and quota set to 1152921504606846976 bytes.
> Volume creation failed, error:VOLUME_ALREADY_EXISTS
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-586) Incorrect url's in Ozone website

2018-10-08 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16642307#comment-16642307
 ] 

Sandeep Nemuri commented on HDDS-586:
-

Attaching the patch with the necessary changes. Kindly review. 

> Incorrect url's in Ozone website
> 
>
> Key: HDDS-586
> URL: https://issues.apache.org/jira/browse/HDDS-586
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: website
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Major
> Attachments: HDDS-586.001.patch, image-2018-10-08-22-21-14-025.png
>
>
> The below section of the website 
>  !image-2018-10-08-22-21-14-025.png! 
> is pointing to the incorrect URLs 
> https://hadoop.apache.org/downloads/ and 
> https://hadoop.apache.org/docs/latest/runningviadocker.html 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-586) Incorrect url's in Ozone website

2018-10-08 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated HDDS-586:

Status: Patch Available  (was: Open)

> Incorrect url's in Ozone website
> 
>
> Key: HDDS-586
> URL: https://issues.apache.org/jira/browse/HDDS-586
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: website
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Major
> Attachments: HDDS-586.001.patch, image-2018-10-08-22-21-14-025.png
>
>
> The below section of the website 
>  !image-2018-10-08-22-21-14-025.png! 
> is pointing to the incorrect URLs 
> https://hadoop.apache.org/downloads/ and 
> https://hadoop.apache.org/docs/latest/runningviadocker.html 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


