[jira] [Commented] (HDFS-14336) Fix checkstyle for NameNodeMXBean

2019-03-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784173#comment-16784173
 ] 

Hudson commented on HDFS-14336:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16124 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16124/])
HDFS-14336. Fix checkstyle for NameNodeMXBean. Contributed by Danny (inigoiri: 
rev 4b7313e640c8d1d6ea74f3483a2b25d35206e539)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java


> Fix checkstyle for NameNodeMXBean
> -
>
> Key: HDFS-14336
> URL: https://issues.apache.org/jira/browse/HDFS-14336
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: HDFS-14336.000.patch, HDFS-14336.001.patch
>
>
> Fix checkstyle in NameNodeMXBean.java and make it more uniform.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14336) Fix checkstyle for NameNodeMXBean

2019-03-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784168#comment-16784168
 ] 

Íñigo Goiri commented on HDFS-14336:


+1 on [^HDFS-14336.001.patch].
The failed unit test is unrelated.
This clears 51 checkstyle issues in the class.

Thanks [~dannytbecker] for the patch.
Committed to trunk.

> Fix checkstyle for NameNodeMXBean
> -
>
> Key: HDFS-14336
> URL: https://issues.apache.org/jira/browse/HDFS-14336
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: HDFS-14336.000.patch, HDFS-14336.001.patch
>
>
> Fix checkstyle in NameNodeMXBean.java and make it more uniform.






[jira] [Commented] (HDFS-14326) Add CorruptFilesCount to JMX

2019-03-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784169#comment-16784169
 ] 

Íñigo Goiri commented on HDFS-14326:


HDFS-14336 is already committed; can you rebase the patch?
We should be able to clear all the checkstyle issues now.

> Add CorruptFilesCount to JMX
> 
>
> Key: HDFS-14326
> URL: https://issues.apache.org/jira/browse/HDFS-14326
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs, metrics, namenode
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Minor
> Attachments: HDFS-14326.000.patch, HDFS-14326.001.patch, 
> HDFS-14326.002.patch
>
>
> Add CorruptFilesCount to JMX






[jira] [Updated] (HDFS-14336) Fix checkstyle for NameNodeMXBean

2019-03-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14336:
---
Summary: Fix checkstyle for NameNodeMXBean  (was: Fix checkstyle for 
NameNodeMXBean.java)

> Fix checkstyle for NameNodeMXBean
> -
>
> Key: HDFS-14336
> URL: https://issues.apache.org/jira/browse/HDFS-14336
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Trivial
> Attachments: HDFS-14336.000.patch, HDFS-14336.001.patch
>
>
> Fix checkstyle in NameNodeMXBean.java and make it more uniform.






[jira] [Updated] (HDFS-14336) Fix checkstyle for NameNodeMXBean

2019-03-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14336:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> Fix checkstyle for NameNodeMXBean
> -
>
> Key: HDFS-14336
> URL: https://issues.apache.org/jira/browse/HDFS-14336
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: HDFS-14336.000.patch, HDFS-14336.001.patch
>
>
> Fix checkstyle in NameNodeMXBean.java and make it more uniform.






[jira] [Commented] (HDDS-935) Avoid creating an already created container on a datanode in case of disk removal followed by datanode restart

2019-03-04 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784143#comment-16784143
 ] 

Shashikant Banerjee commented on HDDS-935:
--

Thanks [~jnp] and [~arpitagarwal] for the review. Patch v7 fixes the related 
test failures, findbugs, and checkstyle issues.

> Avoid creating an already created container on a datanode in case of disk 
> removal followed by datanode restart
> --
>
> Key: HDDS-935
> URL: https://issues.apache.org/jira/browse/HDDS-935
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Rakesh R
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-935.000.patch, HDDS-935.001.patch, 
> HDDS-935.002.patch, HDDS-935.003.patch, HDDS-935.004.patch, 
> HDDS-935.005.patch, HDDS-935.006.patch, HDDS-935.007.patch
>
>
> Currently, a container gets created when a writeChunk request comes to 
> HddsDispatcher and the container does not already exist. If a disk on which a 
> container exists gets removed and the datanode restarts, an incoming 
> writeChunk request might end up creating the same container again with an 
> updated BCSID, as the datanode won't detect that the disk was removed. This 
> won't be detected by SCM either, as the container will have the latest BCSID. 
> This Jira aims to address this issue.
> The proposed fix is to persist all the containerIds existing in the 
> containerSet in the snapshot file when a Ratis snapshot is taken. If the disk 
> is removed and the datanode gets restarted, the container set will be rebuilt 
> after scanning all the available disks, and the container list stored in the 
> snapshot file will give all the containers created on the datanode. The diff 
> between these two will give the exact list of containers which were created 
> but were not detected after the restart. Any writeChunk request should now 
> validate the containerId against the list of missing containers. Also, we need 
> to ensure container creation does not happen as part of applyTransaction of a 
> writeChunk request in Ratis.
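The snapshot-diff idea described above can be sketched in plain Java. This is a hypothetical illustration of the approach, not the actual HDDS code; method names such as findMissingContainers are invented here.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the proposed fix: container IDs persisted in the
// Ratis snapshot are diffed against the set rebuilt by scanning disks after
// a restart; the difference is the set of "missing" containers whose disk
// was removed. Method names are illustrative, not the actual HDDS API.
public class MissingContainerCheck {

  // Containers recorded in the snapshot but not found on any disk now.
  static Set<Long> findMissingContainers(Set<Long> idsInSnapshot,
                                         Set<Long> idsOnDisk) {
    Set<Long> missing = new HashSet<>(idsInSnapshot);
    missing.removeAll(idsOnDisk);
    return missing;
  }

  // A writeChunk must not silently recreate a missing container.
  static boolean shouldRejectWriteChunk(long containerId, Set<Long> missing) {
    return missing.contains(containerId);
  }

  public static void main(String[] args) {
    Set<Long> missing =
        findMissingContainers(Set.of(1L, 2L, 3L), Set.of(1L, 3L));
    System.out.println(missing);                             // [2]
    System.out.println(shouldRejectWriteChunk(2L, missing)); // true
  }
}
```

The set difference is exactly the "created but not detected after restart" list the description talks about; the dispatcher would consult it before treating a writeChunk as an implicit container create.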






[jira] [Updated] (HDDS-935) Avoid creating an already created container on a datanode in case of disk removal followed by datanode restart

2019-03-04 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-935:
-
Attachment: HDDS-935.007.patch

> Avoid creating an already created container on a datanode in case of disk 
> removal followed by datanode restart
> --
>
> Key: HDDS-935
> URL: https://issues.apache.org/jira/browse/HDDS-935
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Rakesh R
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-935.000.patch, HDDS-935.001.patch, 
> HDDS-935.002.patch, HDDS-935.003.patch, HDDS-935.004.patch, 
> HDDS-935.005.patch, HDDS-935.006.patch, HDDS-935.007.patch
>
>
> Currently, a container gets created when a writeChunk request comes to 
> HddsDispatcher and the container does not already exist. If a disk on which a 
> container exists gets removed and the datanode restarts, an incoming 
> writeChunk request might end up creating the same container again with an 
> updated BCSID, as the datanode won't detect that the disk was removed. This 
> won't be detected by SCM either, as the container will have the latest BCSID. 
> This Jira aims to address this issue.
> The proposed fix is to persist all the containerIds existing in the 
> containerSet in the snapshot file when a Ratis snapshot is taken. If the disk 
> is removed and the datanode gets restarted, the container set will be rebuilt 
> after scanning all the available disks, and the container list stored in the 
> snapshot file will give all the containers created on the datanode. The diff 
> between these two will give the exact list of containers which were created 
> but were not detected after the restart. Any writeChunk request should now 
> validate the containerId against the list of missing containers. Also, we need 
> to ensure container creation does not happen as part of applyTransaction of a 
> writeChunk request in Ratis.






[jira] [Commented] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-03-04 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784124#comment-16784124
 ] 

Chao Sun commented on HDFS-14205:
-

Hmm... all the failed tests are passing locally on my laptop. [~vagarychen]: 
if possible, could you verify the patch on your side and see if the tests are 
passing?

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-14205
> URL: https://issues.apache.org/jira/browse/HDFS-14205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14205-branch-2.001.patch, 
> HDFS-14205-branch-2.002.patch, HDFS-14205-branch-2.003.patch, 
> HDFS-14205-branch-2.004.patch, HDFS-14205-branch-2.005.patch, 
> HDFS-14205-branch-2.006.patch, HDFS-14205-branch-2.007.patch
>
>
> Currently support for more than 2 NameNodes (HDFS-6440) is only in branch-3. 
> This JIRA aims to backport it to branch-2, as this is required by HDFS-12943 
> (consistent read from standby) backport to branch-2.






[jira] [Commented] (HDDS-699) Detect Ozone Network topology

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784122#comment-16784122
 ] 

Hadoop QA commented on HDDS-699:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 18 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
13s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 19m  
1s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
31s{color} | {color:red} dist in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} dist in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m 
55s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 55s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 32s{color} | {color:orange} root: The patch generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
29s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 17 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
21s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Commented] (HDFS-14111) hdfsOpenFile on HDFS causes unnecessary IO from file offset 0

2019-03-04 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784091#comment-16784091
 ] 

Wei-Chiu Chuang commented on HDFS-14111:


+1, makes sense to me. Will wait a while for anyone else to have a chance to 
review.

> hdfsOpenFile on HDFS causes unnecessary IO from file offset 0
> -
>
> Key: HDFS-14111
> URL: https://issues.apache.org/jira/browse/HDFS-14111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, libhdfs
>Affects Versions: 3.2.0
>Reporter: Todd Lipcon
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-14111.001.patch, HDFS-14111.002.patch, 
> HDFS-14111.003.patch
>
>
> hdfsOpenFile() calls readDirect() with a 0-length argument in order to check 
> whether the underlying stream supports bytebuffer reads. With DFSInputStream, 
> the read(0) isn't short circuited, and results in the DFSClient opening a 
> block reader. In the case of a remote block, the block reader will actually 
> issue a read of the whole block, causing the datanode to perform unnecessary 
> IO and network transfers in order to fill up the client's TCP buffers. This 
> causes performance degradation.
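One way to avoid the side effects of a zero-length probe read is to detect ByteBuffer-read support with a capability query instead of IO. The sketch below is self-contained and hypothetical: the CapabilityStream interface stands in for Hadoop's StreamCapabilities-style pattern, and the capability string is an assumption, not a confirmed constant from the patch.

```java
// Self-contained sketch of the idea behind a fix: detect ByteBuffer-read
// support with a side-effect-free capability query instead of a zero-length
// readDirect(), which forces DFSInputStream to open a block reader (and, for
// a remote block, triggers a whole-block read on the datanode). The
// CapabilityStream interface below is a stand-in for Hadoop's
// StreamCapabilities, and the capability string is an assumption.
interface CapabilityStream {
  boolean hasCapability(String capability);
}

public class ByteBufferProbe {
  static final String READBYTEBUFFER = "in:readbytebuffer"; // assumed name

  static boolean supportsByteBufferReads(CapabilityStream in) {
    return in.hasCapability(READBYTEBUFFER); // no IO, no block reader
  }

  public static void main(String[] args) {
    CapabilityStream dfsLike = cap -> READBYTEBUFFER.equals(cap);
    System.out.println(supportsByteBufferReads(dfsLike)); // true
  }
}
```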






[jira] [Commented] (HDFS-14334) RBF: Use human readable format for long numbers in the Router UI

2019-03-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784090#comment-16784090
 ] 

Íñigo Goiri commented on HDFS-14334:


Thanks [~ayushtkn] for the comments.
This is just the web UI; I'm not sure one would rely on it for precise numbers.
For precision, one can still check the JMX values directly.

> RBF: Use human readable format for long numbers in the Router UI
> 
>
> Key: HDFS-14334
> URL: https://issues.apache.org/jira/browse/HDFS-14334
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14334-HDFS-13891.000.patch, 
> HDFS-14334-HDFS-13891.001.patch, block-files-numbers.png
>
>
> Currently, for the number of files, we show the raw number. When it starts to 
> get into millions, it is hard to read. We should use a human readable version 
> similar to what we do with PB, GB, MB,...
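A helper of the kind this change implies might look like the hedged sketch below; the suffixes, thresholds, and one-decimal formatting are illustrative choices, not taken from the actual patch.

```java
import java.util.Locale;

// Hedged sketch of the kind of helper the UI change implies: collapse raw
// file/block counts into a short human-readable form, similar to what the
// UI already does for bytes (KB/MB/GB). Suffixes and thresholds here are
// illustrative, not taken from the actual patch.
public class HumanReadableCount {
  private static final String[] SUFFIX = {"", "k", "M", "B", "T"};

  static String format(long n) {
    double v = n;
    int i = 0;
    while (v >= 1000 && i < SUFFIX.length - 1) {
      v /= 1000;
      i++;
    }
    // Locale.ROOT keeps the decimal separator a "." regardless of platform.
    return i == 0 ? Long.toString(n)
                  : String.format(Locale.ROOT, "%.1f%s", v, SUFFIX[i]);
  }

  public static void main(String[] args) {
    System.out.println(format(532));       // 532
    System.out.println(format(4_200_000)); // 4.2M
  }
}
```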






[jira] [Commented] (HDDS-1171) Add benchmark for OM and OM client in Genesis

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784059#comment-16784059
 ] 

Hadoop QA commented on HDDS-1171:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-ozone/tools: The patch generated 15 new + 
0 unchanged - 0 fixed = 15 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  2s{color} 
| {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1171 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961100/HDDS-1171.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4ae470e9fa5b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9fcd89a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2441/artifact/out/diff-checkstyle-hadoop-ozone_tools.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2441/artifact/out/patch-unit-hadoop-ozone_tools.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2441/testReport/ |
| Max. process+thread count | 2364 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/tools U: 

[jira] [Updated] (HDDS-699) Detect Ozone Network topology

2019-03-04 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-699:

Attachment: HDDS-699.04.patch

> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch, HDDS-699.04.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable 
> java class. One thing we want to add here is the flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.
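The "flexible multi-level" idea can be illustrated with a small, hypothetical sketch in which a network location is an arbitrary-depth slash-separated path; the class and method names are invented here, not the HDDS-699 API.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of "flexible multi-level" topology: a network location
// is an arbitrary-depth slash-separated path rather than a fixed
// DC/Rack/Node schema, so closeness can be derived from the common prefix
// depth at any number of levels. Names are illustrative, not the HDDS API.
public class NetLocation {

  static List<String> levels(String location) {
    String trimmed = location.startsWith("/") ? location.substring(1) : location;
    return trimmed.isEmpty() ? List.of() : Arrays.asList(trimmed.split("/"));
  }

  // Depth of the deepest common ancestor of two locations.
  static int commonDepth(String a, String b) {
    List<String> la = levels(a);
    List<String> lb = levels(b);
    int d = 0;
    while (d < la.size() && d < lb.size() && la.get(d).equals(lb.get(d))) {
      d++;
    }
    return d;
  }

  public static void main(String[] args) {
    System.out.println(commonDepth("/dc1/rack2/node1", "/dc1/rack2/node2")); // 2
    System.out.println(commonDepth("/dc1/rack1/node1", "/dc2/rack1/node1")); // 0
  }
}
```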






[jira] [Comment Edited] (HDDS-807) Period should be an invalid character in bucket names

2019-03-04 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784043#comment-16784043
 ] 

Siddharth Wagle edited comment on HDDS-807 at 3/5/19 4:22 AM:
--

[~bharatviswa] The mock tests do test out the changes that are made in the 
patch. IMHO, the test for 
{noformat}
org.apache.hadoop.ozone.client.OzoneClientFactory#getRpcClient(java.lang.String,
 java.lang.Integer, org.apache.hadoop.conf.Configuration)
{noformat} should not be in the scope of this Jira. I do see that there are no 
direct calls that instantiate the RpcClient with OM host and port. We should 
open another Jira to test the RpcClient primed with a user-provided host and 
port, to improve test coverage. If you agree, I'd be happy to file a separate 
Jira for it.

The FITs actually are quite heavy-weight and I deliberately wanted to add the 
mock tests to make sure testing a small change like this does not need a 
cluster spin up.



was (Author: swagle):
[~bharatviswa] The mock tests do test out the changes that are made in the 
patch. IMHO, the test for 
{noformat}
org.apache.hadoop.ozone.client.OzoneClientFactory#getRpcClient(java.lang.String,
 java.lang.Integer, org.apache.hadoop.conf.Configuration)
{noformat} should not be in the scope of this Jira. I do see that there are no 
direct calls that instantiate the RpcClient with OM host and port. We should 
open another Jira to test out the RpcClient primed with user-provided host port 
as a separate Jira to improve test coverage. 


> Period should be an invalid character in bucket names
> -
>
> Key: HDDS-807
> URL: https://issues.apache.org/jira/browse/HDDS-807
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-807.01.patch, HDDS-807.02.patch, HDDS-807.03.patch, 
> HDDS-807.04.patch
>
>
> ozonefs paths use the following syntax:
> - o3fs://bucket.volume/..
> The OM host and port are read from configuration. We cannot specify a target 
> filesystem with a fully qualified path. E.g. 
> _o3fs://bucket.volume.om-host.example.com:9862/. Hence we cannot hand a fully 
> qualified URL with OM hostname to a client without setting up config files 
> beforehand. This is inconvenient. It also means there is no way to perform a 
> distcp from one Ozone cluster to another.
> We need a way to support fully qualified paths with OM hostname and port 
> _bucket.volume.om-host.example.com_. If we allow periods in bucket names, then 
> such fully qualified paths cannot be parsed unambiguously. However, if we 
> disallow periods, then we can support all of the following paths 
> unambiguously.
>  # *o3fs://bucket.volume/key* - The authority has only two period-separated 
> components. These must be bucket and volume name respectively.
>  # *o3fs://bucket.volume.om-host.example.com/key* - The authority has more 
> than two components. The first two must be bucket and volume, the rest must 
> be the hostname.
>  # *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, 
> except with a port number.
>  
> Open question is around HA support. I believe for HA we will have to 
> introduce the notion of a _nameservice_, similar to HDFS nameservice. This 
> will allow a fourth kind of Ozone URL:
>  - *o3fs://bucket.volume.ns1/key* - How do we distinguish this from #3 above? 
> One way could be to find if _ns1_ is known as an Ozone nameservice via 
> configuration. If so then treat it as the name of an HA service. Else treat 
> it as a hostname.
>  
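The unambiguous parsing argued for above (assuming periods are banned in bucket names) can be sketched as follows; the class and field names are hypothetical, not the actual ozonefs implementation.

```java
// Illustrative sketch of the unambiguous parsing argued for above, assuming
// periods are banned in bucket names: the first two dot-separated authority
// components are always bucket and volume, and anything left over is the OM
// host (with an optional port). Class and field names are hypothetical.
public class O3fsAuthority {
  final String bucket;
  final String volume;
  final String host; // null when no host is given, e.g. "bucket.volume"

  O3fsAuthority(String authority) {
    // Split off an optional :port before splitting the host part on periods.
    int colon = authority.indexOf(':');
    String port = colon >= 0 ? authority.substring(colon) : "";
    String hostPart = colon >= 0 ? authority.substring(0, colon) : authority;
    String[] parts = hostPart.split("\\.", 3);
    if (parts.length < 2) {
      throw new IllegalArgumentException("need bucket.volume: " + authority);
    }
    bucket = parts[0];
    volume = parts[1];
    host = parts.length == 3 ? parts[2] + port : null;
  }

  public static void main(String[] args) {
    O3fsAuthority a =
        new O3fsAuthority("bucket.volume.om-host.example.com:5678");
    System.out.println(a.bucket + " / " + a.volume + " / " + a.host);
    // bucket / volume / om-host.example.com:5678
  }
}
```

This covers cases #1 through #3 from the list; the nameservice case (#4) would need the extra configuration lookup the description mentions to tell a nameservice from a hostname.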






[jira] [Commented] (HDDS-807) Period should be an invalid character in bucket names

2019-03-04 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784043#comment-16784043
 ] 

Siddharth Wagle commented on HDDS-807:
--

[~bharatviswa] The mock tests do test out the changes that are made in the 
patch. IMHO, the test for 
{noformat}
org.apache.hadoop.ozone.client.OzoneClientFactory#getRpcClient(java.lang.String,
 java.lang.Integer, org.apache.hadoop.conf.Configuration)
{noformat} should not be in the scope of this Jira. I do see that there are no 
direct calls that instantiate the RpcClient with OM host and port. We should 
open another Jira to test the RpcClient primed with a user-provided host and 
port, to improve test coverage. 


> Period should be an invalid character in bucket names
> -
>
> Key: HDDS-807
> URL: https://issues.apache.org/jira/browse/HDDS-807
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-807.01.patch, HDDS-807.02.patch, HDDS-807.03.patch, 
> HDDS-807.04.patch
>
>
> ozonefs paths use the following syntax:
> - o3fs://bucket.volume/..
> The OM host and port are read from configuration. We cannot specify a target 
> filesystem with a fully qualified path. E.g. 
> _o3fs://bucket.volume.om-host.example.com:9862/. Hence we cannot hand a fully 
> qualified URL with OM hostname to a client without setting up config files 
> beforehand. This is inconvenient. It also means there is no way to perform a 
> distcp from one Ozone cluster to another.
> We need a way to support fully qualified paths with OM hostname and port 
> _bucket.volume.om-host.example.com_. If we allow periods in bucket names, then 
> such fully qualified paths cannot be parsed unambiguously. However, if we 
> disallow periods, then we can support all of the following paths 
> unambiguously.
>  # *o3fs://bucket.volume/key* - The authority has only two period-separated 
> components. These must be bucket and volume name respectively.
>  # *o3fs://bucket.volume.om-host.example.com/key* - The authority has more 
> than two components. The first two must be bucket and volume, the rest must 
> be the hostname.
>  # *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, 
> except with a port number.
>  
> Open question is around HA support. I believe for HA we will have to 
> introduce the notion of a _nameservice_, similar to HDFS nameservice. This 
> will allow a fourth kind of Ozone URL:
>  - *o3fs://bucket.volume.ns1/key* - How do we distinguish this from #3 above? 
> One way could be to find if _ns1_ is known as an Ozone nameservice via 
> configuration. If so then treat it as the name of an HA service. Else treat 
> it as a hostname.
>  
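The disambiguation rules above can be sketched as a small authority parser. This is a hypothetical helper for illustration only, not the actual OzoneFileSystem code; the class and method names are made up.

```java
// Sketch of the authority-parsing rules described above, assuming
// periods are disallowed in bucket and volume names.
public class O3fsAuthorityParser {

  // Returns {bucket, volume, host-or-null, port-as-string}.
  public static String[] parse(String authority) {
    String host = null;
    int port = -1;
    String names = authority;
    int colon = authority.indexOf(':');
    if (colon >= 0) {
      port = Integer.parseInt(authority.substring(colon + 1));
      names = authority.substring(0, colon);
    }
    // Limit 3: the first two components are bucket and volume,
    // everything after them is the OM hostname.
    String[] parts = names.split("\\.", 3);
    if (parts.length < 2) {
      throw new IllegalArgumentException("authority needs bucket.volume");
    }
    String bucket = parts[0];
    String volume = parts[1];
    if (parts.length == 3) {
      host = parts[2];
    }
    return new String[] {bucket, volume, host, Integer.toString(port)};
  }

  public static void main(String[] args) {
    String[] r = parse("bucket.volume.om-host.example.com:5678");
    if (!r[0].equals("bucket") || !r[1].equals("volume")
        || !r[2].equals("om-host.example.com") || !r[3].equals("5678")) {
      throw new AssertionError("unexpected parse result");
    }
  }
}
```

With this rule, `bucket.volume` (two components) and `bucket.volume.om-host.example.com[:port]` (more than two) are both unambiguous, which is exactly why periods must be invalid inside bucket names.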






[jira] [Comment Edited] (HDDS-699) Detect Ozone Network topology

2019-03-04 Thread Sammi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784040#comment-16784040
 ] 

Sammi Chen edited comment on HDDS-699 at 3/5/19 4:14 AM:
-

Hi [~szetszwo], thanks for your time. 04.patch is uploaded to address the 
issues reported by checkstyle and findbugs. I'm still investigating the build 
compile failure caused by 
{quote}cp: cannot stat 
'/testptch/hadoop/hadoop-ozone/objectstore-service/target/hadoop-ozone-objectstore-service-0.4.0-SNAPSHOT-plugin.jar':
 No such file or directory{quote}
The issue doesn't happen locally.



was (Author: sammi):
Hi [~szetszwo], thanks for the time.  04.patch is uploaded to address the 
issues reported by checkstyle and findbugs. I'm still investigating the build 
compile failure issue caused by 
{quote}cp: cannot stat 
'/testptch/hadoop/hadoop-ozone/objectstore-service/target/hadoop-ozone-objectstore-service-0.4.0-SNAPSHOT-plugin.jar':
 No such file or directory{quote}
The issue doesn't happen locally


> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch, HDDS-699.04.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable 
> java class. One thing we want to add here is the flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.
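The flexible multi-level support mentioned above amounts to modeling a network location as a path of arbitrary depth rather than fixed DC/Rack/NG/Node fields. A minimal sketch under that assumption (hypothetical class, not the patch's implementation):

```java
import java.util.Arrays;
import java.util.List;

// Sketch: a network location as an arbitrary-depth path such as
// "/dc1/rack2/ng3/node4" instead of fixed DC/Rack/NG/Node fields.
public class NetworkLocation {
  private final List<String> levels;

  public NetworkLocation(String path) {
    if (!path.startsWith("/")) {
      throw new IllegalArgumentException("path must start with /");
    }
    this.levels = Arrays.asList(path.substring(1).split("/"));
  }

  public int depth() {
    return levels.size();
  }

  // Number of leading levels shared with another location; a larger
  // value means the two nodes are topologically closer.
  public int commonPrefix(NetworkLocation other) {
    int n = Math.min(depth(), other.depth());
    int i = 0;
    while (i < n && levels.get(i).equals(other.levels.get(i))) {
      i++;
    }
    return i;
  }
}
```

Because depth is not fixed, a deployment can configure two levels (rack/node) or five without changing the placement logic that consumes `commonPrefix`.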






[jira] [Commented] (HDDS-699) Detect Ozone Network topology

2019-03-04 Thread Sammi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784040#comment-16784040
 ] 

Sammi Chen commented on HDDS-699:
-

Hi [~szetszwo], thanks for the time.  04.patch is uploaded to address the 
issues reported by checkstyle and findbugs. I'm still investigating the build 
compile failure issue caused by 
{quote}cp: cannot stat 
'/testptch/hadoop/hadoop-ozone/objectstore-service/target/hadoop-ozone-objectstore-service-0.4.0-SNAPSHOT-plugin.jar':
 No such file or directory{quote}
The issue doesn't happen locally


> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch, HDDS-699.04.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable 
> java class. One thing we want to add here is the flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.






[jira] [Updated] (HDDS-699) Detect Ozone Network topology

2019-03-04 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-699:

Attachment: HDDS-699.04.patch

> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch, HDDS-699.04.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable 
> java class. One thing we want to add here is the flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.






[jira] [Updated] (HDDS-699) Detect Ozone Network topology

2019-03-04 Thread Sammi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-699:

Attachment: (was: HDDS-699.04.patch)

> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable 
> java class. One thing we want to add here is the flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.






[jira] [Commented] (HDDS-1171) Add benchmark for OM and OM client in Genesis

2019-03-04 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784024#comment-16784024
 ] 

Lokesh Jain commented on HDDS-1171:
---

[~anu] Thanks for reviewing the patch! I have uploaded rebased v3 patch.

> Add benchmark for OM and OM client in Genesis
> -
>
> Key: HDDS-1171
> URL: https://issues.apache.org/jira/browse/HDDS-1171
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1171.001.patch, HDDS-1171.002.patch, 
> HDDS-1171.003.patch
>
>
> This Jira aims to add benchmark for OM and OM client in Genesis.






[jira] [Updated] (HDDS-1171) Add benchmark for OM and OM client in Genesis

2019-03-04 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1171:
--
Attachment: HDDS-1171.003.patch

> Add benchmark for OM and OM client in Genesis
> -
>
> Key: HDDS-1171
> URL: https://issues.apache.org/jira/browse/HDDS-1171
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1171.001.patch, HDDS-1171.002.patch, 
> HDDS-1171.003.patch
>
>
> This Jira aims to add benchmark for OM and OM client in Genesis.






[jira] [Commented] (HDFS-14317) Standby does not trigger edit log rolling when in-progress edit log tailing is enabled

2019-03-04 Thread star (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784017#comment-16784017
 ] 

star commented on HDFS-14317:
-

[~ekanth], nice patch.

I am not quite clear on 'With in-progress edit log tailing enabled, 
tooLongSinceLastLoad() will almost never return true resulting in edit logs not 
rolled'. Could you explain it in more detail? 
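For illustration, the interplay described in the quoted report can be simulated with the default periods it cites. This is a simplified sketch, not the actual EditLogTailer code; the class and method names are invented.

```java
// Simplified simulation of the standby's roll check. With in-progress
// tailing enabled the standby loads edits on nearly every tail cycle,
// so lastLoadTimeMs keeps resetting and the 120s threshold is never
// crossed; without it, edits only arrive after a roll, so the elapsed
// time keeps growing.
public class RollCheckSim {
  static final long LOG_ROLL_PERIOD_MS = 120_000;  // dfs.ha.log-roll.period
  static final long TAIL_PERIOD_MS = 60_000;       // dfs.ha.tail-edits.period

  public static boolean everTriggersRoll(int cycles, boolean inProgressTailing) {
    long now = 0;
    long lastLoadTimeMs = 0;
    for (int i = 0; i < cycles; i++) {
      now += TAIL_PERIOD_MS;
      boolean tooLong = (now - lastLoadTimeMs) > LOG_ROLL_PERIOD_MS;
      if (tooLong) {
        return true;  // standby would ask active to roll
      }
      if (inProgressTailing) {
        lastLoadTimeMs = now;  // edits loaded this cycle: clock resets
      }
    }
    return false;
  }
}
```

With in-progress tailing the elapsed time since the last load is always one tail period (60s), which never exceeds the 120s roll period, so the roll is never requested.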

> Standby does not trigger edit log rolling when in-progress edit log tailing 
> is enabled
> --
>
> Key: HDFS-14317
> URL: https://issues.apache.org/jira/browse/HDFS-14317
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Critical
> Attachments: HDFS-14317.001.patch, HDFS-14317.002.patch, 
> HDFS-14317.003.patch, HDFS-14317.004.patch
>
>
> The standby uses the following method to check if it is time to trigger edit 
> log rolling on active.
> {code}
>   /**
>* @return true if the configured log roll period has elapsed.
>*/
>   private boolean tooLongSinceLastLoad() {
> return logRollPeriodMs >= 0 && 
>   (monotonicNow() - lastLoadTimeMs) > logRollPeriodMs ;
>   }
> {code}
> In doTailEdits(), lastLoadTimeMs is updated when standby is able to 
> successfully tail any edits
> {code}
>   if (editsLoaded > 0) {
> lastLoadTimeMs = monotonicNow();
>   }
> {code}
> The default configuration for {{dfs.ha.log-roll.period}} is 120 seconds and 
> {{dfs.ha.tail-edits.period}} is 60 seconds. With in-progress edit log tailing 
> enabled, tooLongSinceLastLoad() will almost never return true resulting in 
> edit logs not rolled for a long time until this configuration 
> {{dfs.namenode.edit.log.autoroll.multiplier.threshold}} takes effect.
> [In our deployment, this resulted in in-progress edit logs getting deleted. 
> The sequence of events is that standby was able to checkpoint twice while the 
> in-progress edit log was growing on active. When the 
> NNStorageRetentionManager decided to cleanup old checkpoints and edit logs, 
> it cleaned up the in-progress edit log from active and QJM (as the txnid on 
> in-progress edit log was older than the 2 most recent checkpoints) resulting 
> in irrecoverably losing a few minutes worth of metadata].






[jira] [Work logged] (HDDS-919) Enable prometheus endpoints for Ozone datanodes

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-919?focusedWorklogId=207579=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207579
 ]

ASF GitHub Bot logged work on HDDS-919:
---

Author: ASF GitHub Bot
Created on: 05/Mar/19 02:58
Start Date: 05/Mar/19 02:58
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #502: HDDS-919. 
Enable prometheus endpoints for Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-469516436
 
 
   +1 LGTM.
   I don't think test failures are related to this patch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 207579)
Time Spent: 4h 40m  (was: 4.5h)

> Enable prometheus endpoints for Ozone datanodes
> ---
>
> Key: HDDS-919
> URL: https://issues.apache.org/jira/browse/HDDS-919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> HDDS-846 provides a new metric endpoint which publishes the available Hadoop 
> metrics in prometheus friendly format with a new servlet.
> Unfortunately it's enabled only on the scm/om side. It would be great to 
> enable it in the Ozone/HDDS datanodes on the web server of the HDDS Rest 
> endpoint. 
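For context, a "prometheus friendly format" here means flattening Hadoop metric names into the Prometheus text exposition format. A rough sketch of that conversion (hypothetical class, not the HDDS-846 sink itself):

```java
import java.util.Locale;
import java.util.Map;
import java.util.TreeMap;

// Sketch: render metrics as Prometheus text exposition lines such as
// "rpc_queue_time_num_ops 42".
public class PrometheusTextFormat {

  // CamelCase -> snake_case, dots -> underscores.
  public static String toPrometheusName(String hadoopName) {
    String s = hadoopName.replaceAll("([a-z0-9])([A-Z])", "$1_$2");
    return s.replace('.', '_').toLowerCase(Locale.ROOT);
  }

  public static String render(Map<String, Long> metrics) {
    StringBuilder sb = new StringBuilder();
    // TreeMap gives a stable, sorted output order.
    for (Map.Entry<String, Long> e : new TreeMap<>(metrics).entrySet()) {
      sb.append(toPrometheusName(e.getKey()))
        .append(' ').append(e.getValue()).append('\n');
    }
    return sb.toString();
  }
}
```

Exposing such a servlet on the datanode web server, as the issue requests, would let a Prometheus server scrape datanode metrics the same way it scrapes the SCM and OM.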






[jira] [Comment Edited] (HDDS-807) Period should be an invalid character in bucket names

2019-03-04 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784011#comment-16784011
 ] 

Bharat Viswanadham edited comment on HDDS-807 at 3/5/19 3:03 AM:
-

Hi [~swagle]

Can we add functional unit tests, instead of mock tests?

 


was (Author: bharatviswa):
Hi [~swagle]

Can we add functional unit tests?

> Period should be an invalid character in bucket names
> -
>
> Key: HDDS-807
> URL: https://issues.apache.org/jira/browse/HDDS-807
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-807.01.patch, HDDS-807.02.patch, HDDS-807.03.patch, 
> HDDS-807.04.patch
>
>
> ozonefs paths use the following syntax:
> - o3fs://bucket.volume/..
> The OM host and port are read from configuration. We cannot specify a target 
> filesystem with a fully qualified path. E.g. 
> _o3fs://bucket.volume.om-host.example.com:9862/. Hence we cannot hand a fully 
> qualified URL with OM hostname to a client without setting up config files 
> beforehand. This is inconvenient. It also means there is no way to perform a 
> distcp from one Ozone cluster to another.
> We need a way to support fully qualified paths with OM hostname and port 
> _bucket.volume.om-host.example.com_. If we allow periods in bucket names, then 
> such fully qualified paths cannot be parsed unambiguously. However if we 
> disallow periods, then we can support all of the following paths 
> unambiguously.
>  # *o3fs://bucket.volume/key* - The authority has only two period-separated 
> components. These must be bucket and volume name respectively.
>  # *o3fs://bucket.volume.om-host.example.com/key* - The authority has more 
> than two components. The first two must be bucket and volume, the rest must 
> be the hostname.
>  # *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, 
> except with a port number.
>  
> Open question is around HA support. I believe for HA we will have to 
> introduce the notion of a _nameservice_, similar to HDFS nameservice. This 
> will allow a fourth kind of Ozone URL:
>  - *o3fs://bucket.volume.ns1/key* - How do we distinguish this from #3 above? 
> One way could be to find if _ns1_ is known as an Ozone nameservice via 
> configuration. If so then treat it as the name of an HA service. Else treat 
> it as a hostname.
>  






[jira] [Comment Edited] (HDDS-807) Period should be an invalid character in bucket names

2019-03-04 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784011#comment-16784011
 ] 

Bharat Viswanadham edited comment on HDDS-807 at 3/5/19 3:02 AM:
-

Hi [~swagle]

Can we add functional unit tests?


was (Author: bharatviswa):
Hi [~swagle] 

Can we add functional unit tests, instead of Mock tests?

> Period should be an invalid character in bucket names
> -
>
> Key: HDDS-807
> URL: https://issues.apache.org/jira/browse/HDDS-807
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-807.01.patch, HDDS-807.02.patch, HDDS-807.03.patch, 
> HDDS-807.04.patch
>
>
> ozonefs paths use the following syntax:
> - o3fs://bucket.volume/..
> The OM host and port are read from configuration. We cannot specify a target 
> filesystem with a fully qualified path. E.g. 
> _o3fs://bucket.volume.om-host.example.com:9862/. Hence we cannot hand a fully 
> qualified URL with OM hostname to a client without setting up config files 
> beforehand. This is inconvenient. It also means there is no way to perform a 
> distcp from one Ozone cluster to another.
> We need a way to support fully qualified paths with OM hostname and port 
> _bucket.volume.om-host.example.com_. If we allow periods in bucket names, then 
> such fully qualified paths cannot be parsed unambiguously. However if we 
> disallow periods, then we can support all of the following paths 
> unambiguously.
>  # *o3fs://bucket.volume/key* - The authority has only two period-separated 
> components. These must be bucket and volume name respectively.
>  # *o3fs://bucket.volume.om-host.example.com/key* - The authority has more 
> than two components. The first two must be bucket and volume, the rest must 
> be the hostname.
>  # *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, 
> except with a port number.
>  
> Open question is around HA support. I believe for HA we will have to 
> introduce the notion of a _nameservice_, similar to HDFS nameservice. This 
> will allow a fourth kind of Ozone URL:
>  - *o3fs://bucket.volume.ns1/key* - How do we distinguish this from #3 above? 
> One way could be to find if _ns1_ is known as an Ozone nameservice via 
> configuration. If so then treat it as the name of an HA service. Else treat 
> it as a hostname.
>  






[jira] [Commented] (HDDS-807) Period should be an invalid character in bucket names

2019-03-04 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784011#comment-16784011
 ] 

Bharat Viswanadham commented on HDDS-807:
-

Hi [~swagle] 

Can we add functional unit tests, instead of Mock tests?

> Period should be an invalid character in bucket names
> -
>
> Key: HDDS-807
> URL: https://issues.apache.org/jira/browse/HDDS-807
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-807.01.patch, HDDS-807.02.patch, HDDS-807.03.patch, 
> HDDS-807.04.patch
>
>
> ozonefs paths use the following syntax:
> - o3fs://bucket.volume/..
> The OM host and port are read from configuration. We cannot specify a target 
> filesystem with a fully qualified path. E.g. 
> _o3fs://bucket.volume.om-host.example.com:9862/. Hence we cannot hand a fully 
> qualified URL with OM hostname to a client without setting up config files 
> beforehand. This is inconvenient. It also means there is no way to perform a 
> distcp from one Ozone cluster to another.
> We need a way to support fully qualified paths with OM hostname and port 
> _bucket.volume.om-host.example.com_. If we allow periods in bucket names, then 
> such fully qualified paths cannot be parsed unambiguously. However if we 
> disallow periods, then we can support all of the following paths 
> unambiguously.
>  # *o3fs://bucket.volume/key* - The authority has only two period-separated 
> components. These must be bucket and volume name respectively.
>  # *o3fs://bucket.volume.om-host.example.com/key* - The authority has more 
> than two components. The first two must be bucket and volume, the rest must 
> be the hostname.
>  # *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, 
> except with a port number.
>  
> Open question is around HA support. I believe for HA we will have to 
> introduce the notion of a _nameservice_, similar to HDFS nameservice. This 
> will allow a fourth kind of Ozone URL:
>  - *o3fs://bucket.volume.ns1/key* - How do we distinguish this from #3 above? 
> One way could be to find if _ns1_ is known as an Ozone nameservice via 
> configuration. If so then treat it as the name of an HA service. Else treat 
> it as a hostname.
>  






[jira] [Commented] (HDFS-14336) Fix checkstyle for NameNodeMXBean.java

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784008#comment-16784008
 ] 

Hadoop QA commented on HDFS-14336:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 0 unchanged - 51 fixed = 0 total (was 51) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14336 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961076/HDFS-14336.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b52a3c878da6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9fcd89a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26406/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26406/testReport/ |
| Max. process+thread count | 3934 (vs. ulimit of 1) |
| modules | C: 

[jira] [Work logged] (HDDS-1216) Change name of ozoneManager service in docker compose files to om

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1216?focusedWorklogId=207575=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207575
 ]

ASF GitHub Bot logged work on HDDS-1216:


Author: ASF GitHub Bot
Created on: 05/Mar/19 02:50
Start Date: 05/Mar/19 02:50
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #553: 
HDDS-1216. Change name of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#discussion_r262329390
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml
 ##
 @@ -26,15 +26,15 @@ services:
   command: ["/opt/hadoop/bin/ozone","datanode"]
   env_file:
 - ./docker-config
-   ozoneManager:
+   om:
   image: apache/hadoop-runner
   privileged: true #required by the profiler
   volumes:
  - ../..:/opt/hadoop
   ports:
  - 9874:9874
   environment:
- ENSURE_OM_INITIALIZED: /data/metadata/ozoneManager/current/VERSION
+ ENSURE_OM_INITIALIZED: /data/metadata/om/current/VERSION
 
 Review comment:
   Here we have changed it to om; will this be the same as the service name 
when the path is created?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 207575)
Time Spent: 1h 10m  (was: 1h)

> Change name of ozoneManager service in docker compose files to om
> -
>
> Key: HDDS-1216
> URL: https://issues.apache.org/jira/browse/HDDS-1216
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Change name of ozoneManager service in docker compose files to om for 
> consistency. (secure ozone compose file will use "om"). 






[jira] [Work logged] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-594?focusedWorklogId=207572=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207572
 ]

ASF GitHub Bot logged work on HDDS-594:
---

Author: ASF GitHub Bot
Created on: 05/Mar/19 02:42
Start Date: 05/Mar/19 02:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #547: HDDS-594. SCM CA: 
DN sends CSR and uses certificate issued by SCM.
URL: https://github.com/apache/hadoop/pull/547#issuecomment-469513188
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for branch |
   | +1 | mvninstall | 980 | trunk passed |
   | +1 | compile | 72 | trunk passed |
   | +1 | checkstyle | 29 | trunk passed |
   | +1 | mvnsite | 75 | trunk passed |
   | +1 | shadedclient | 716 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 115 | trunk passed |
   | +1 | javadoc | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for patch |
   | +1 | mvninstall | 76 | the patch passed |
   | +1 | compile | 67 | the patch passed |
   | +1 | javac | 67 | the patch passed |
   | -0 | checkstyle | 22 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 61 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 741 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 128 | the patch passed |
   | +1 | javadoc | 62 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 62 | common in the patch failed. |
   | -1 | unit | 71 | container-service in the patch failed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3464 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificates.TestCertificateSignRequest |
   |   | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   |   | hadoop.ozone.container.common.TestDatanodeStateMachine |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/547 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux a6fe03809c56 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9fcd89a |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/5/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/5/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/5/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/5/testReport/ |
   | Max. process+thread count | 423 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-547/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 207572)
Time Spent: 0.5h  (was: 20m)

> SCM CA: DN sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-594
> URL: https://issues.apache.org/jira/browse/HDDS-594
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: 

[jira] [Work logged] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-594?focusedWorklogId=207574=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207574
 ]

ASF GitHub Bot logged work on HDDS-594:
---

Author: ASF GitHub Bot
Created on: 05/Mar/19 02:44
Start Date: 05/Mar/19 02:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #547: HDDS-594. SCM CA: 
DN sends CSR and uses certificate issued by SCM.
URL: https://github.com/apache/hadoop/pull/547#issuecomment-469513593
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1066 | trunk passed |
   | +1 | compile | 88 | trunk passed |
   | +1 | checkstyle | 31 | trunk passed |
   | +1 | mvnsite | 83 | trunk passed |
   | +1 | shadedclient | 827 | branch has no errors when building and testing our client artifacts. |
   | +1 | findbugs | 144 | trunk passed |
   | +1 | javadoc | 97 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for patch |
   | +1 | mvninstall | 80 | the patch passed |
   | +1 | compile | 72 | the patch passed |
   | +1 | javac | 72 | the patch passed |
   | +1 | checkstyle | 25 | the patch passed |
   | +1 | mvnsite | 77 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 829 | patch has no errors when building and testing our client artifacts. |
   | +1 | findbugs | 149 | the patch passed |
   | +1 | javadoc | 67 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 86 | common in the patch failed. |
   | +1 | unit | 68 | container-service in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3922 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-547/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/547 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 5c5da289cb7d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9fcd89a |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/4/artifact/out/patch-unit-hadoop-hdds_common.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/4/testReport/ |
   | Max. process+thread count | 313 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: hadoop-hdds |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 207574)
Time Spent: 40m  (was: 0.5h)

> SCM CA: DN sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-594
> URL: https://issues.apache.org/jira/browse/HDDS-594
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-594.00.patch, HDDS-594.01.patch, HDDS-594.02.patch, 
> HDDS-594.03.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, 

[jira] [Work logged] (HDDS-1216) Change name of ozoneManager service in docker compose files to om

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1216?focusedWorklogId=207570=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207570
 ]

ASF GitHub Bot logged work on HDDS-1216:


Author: ASF GitHub Bot
Created on: 05/Mar/19 02:39
Start Date: 05/Mar/19 02:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #553: 
HDDS-1216. Change name of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#discussion_r262327729
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
 ##
 @@ -107,5 +140,16 @@ Run ozoneFS tests
 Execute   ls -l GET.txt
 ${rc}  ${result} =  Run And Return Rc And Outputozone fs -ls 
o3fs://abcde.pqrs/
 Should Be Equal As Integers ${rc}1
-Should contain${result} VOLUME_NOT_FOUND
+Should contain${result} not found
+
+
+Secure S3 test Failure
+Run Keyword Install aws cli
+${rc}  ${result} =  Run And Return Rc And Output  aws s3api --endpoint-url 
${ENDPOINT_URL} create-bucket --bucket bucket-test123
+Should Be True ${rc} > 0
+
+Secure S3 test Success
+Run Keyword Setup credentials
+${output} = Execute  aws s3api --endpoint-url 
${ENDPOINT_URL} create-bucket --bucket bucket-test123
+
 
 Review comment:
   Can we remove S3 related changes from this?
 



Issue Time Tracking
---

Worklog Id: (was: 207570)
Time Spent: 1h  (was: 50m)

> Change name of ozoneManager service in docker compose files to om
> -
>
> Key: HDDS-1216
> URL: https://issues.apache.org/jira/browse/HDDS-1216
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Change name of ozoneManager service in docker compose files to om for 
> consistency. (secure ozone compose file will use "om"). 






[jira] [Work logged] (HDDS-1216) Change name of ozoneManager service in docker compose files to om

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1216?focusedWorklogId=207569=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207569
 ]

ASF GitHub Bot logged work on HDDS-1216:


Author: ASF GitHub Bot
Created on: 05/Mar/19 02:39
Start Date: 05/Mar/19 02:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #553: 
HDDS-1216. Change name of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#discussion_r262327659
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
 ##
 @@ -16,14 +16,44 @@
 *** Settings ***
 Documentation   Smoke test to start cluster with docker-compose 
environments.
 Library OperatingSystem
+Library String
 Resource../commonlib.robot
 
+*** Variables ***
+${ENDPOINT_URL}   http://s3g:9878
+
+*** Keywords ***
+Install aws cli s3 centos
+Executesudo yum install -y awscli
+Executesudo yum install -y krb5-user
+Install aws cli s3 debian
+Executesudo apt-get install -y awscli
+Executesudo apt-get install -y krb5-user
+
+Install aws cli
+${rc}  ${output} = Run And Return Rc And 
Output   which apt-get
+Run Keyword if '${rc}' == '0'  Install aws cli s3 debian
+${rc}  ${output} = Run And Return Rc And 
Output   yum --help
+Run Keyword if '${rc}' == '0'  Install aws cli s3 centos
+
+Setup credentials
+${hostname}=Executehostname
+Execute kinit -k testuser/${hostname}@EXAMPLE.COM -t 
/etc/security/keytabs/testuser.keytab
+${result} = Executeozone sh s3 getsecret
+${accessKey} =  Get Regexp Matches ${result} 
(?<=awsAccessKey=).*
+${secret} = Get Regexp Matches${result} 
(?<=awsSecret=).*
 
 Review comment:
   Can we remove this change from here? As this Jira purpose is not for this.
 



Issue Time Tracking
---

Worklog Id: (was: 207569)
Time Spent: 1h  (was: 50m)

> Change name of ozoneManager service in docker compose files to om
> -
>
> Key: HDDS-1216
> URL: https://issues.apache.org/jira/browse/HDDS-1216
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Change name of ozoneManager service in docker compose files to om for 
> consistency. (secure ozone compose file will use "om"). 






[jira] [Commented] (HDFS-14334) RBF: Use human readable format for long numbers in the Router UI

2019-03-04 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783991#comment-16783991
 ] 

Ayush Saxena commented on HDFS-14334:
-

Thanx [~elgoiri] for the patch.

The idea seems fair enough.

Just a thought: will this compromise the precision of the values, since we are 
now showing rounded-off values rather than the exact values?

> RBF: Use human readable format for long numbers in the Router UI
> 
>
> Key: HDFS-14334
> URL: https://issues.apache.org/jira/browse/HDFS-14334
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14334-HDFS-13891.000.patch, 
> HDFS-14334-HDFS-13891.001.patch, block-files-numbers.png
>
>
> Currently, for the number of files, we show the raw number. When it starts to 
> get into millions, it is hard to read. We should use a human readable version 
> similar to what we do with PB, GB, MB,...
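A rounded display of the kind discussed above can be sketched as follows. This is an illustrative snippet only, not the actual Router UI code; the unit suffixes, the `HumanReadable` class name, and the one-decimal rounding are assumptions:

```java
import java.util.Locale;

// Illustrative sketch: render a large long counter in human-readable form,
// trading exactness for readability (the precision concern raised above).
public class HumanReadable {
  static final String[] UNITS = {"", "k", "M", "B", "T"};

  static String format(long value) {
    int unit = 0;
    double v = value;
    // Divide down by 1000 until the value fits in three digits.
    while (v >= 1000 && unit < UNITS.length - 1) {
      v /= 1000;
      unit++;
    }
    // Small values stay exact; rounded values get one decimal place.
    return unit == 0
        ? Long.toString(value)
        : String.format(Locale.ROOT, "%.1f%s", v, UNITS[unit]);
  }
}
```

Note the trade-off in the comment thread: once a value crosses 1000, only one decimal of precision survives, so the exact count would have to be exposed elsewhere (e.g. a tooltip) if needed.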






[jira] [Commented] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783959#comment-16783959
 ] 

Hadoop QA commented on HDFS-14205:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 31 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
42s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
21s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
32s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2 has 1 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
8s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
12s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
18s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 32s{color} | {color:orange} root: The patch generated 226 new + 1173 
unchanged - 53 fixed = 1399 total (was 1226) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m  1s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
41s{color} | {color:green} bkjournal in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not 

[jira] [Updated] (HDFS-14314) fullBlockReportLeaseId should be reset after registering to NN

2019-03-04 Thread star (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

star updated HDFS-14314:

Attachment: HDFS-14314-branch-2.8.001.patch

> fullBlockReportLeaseId should be reset after registering to NN
> --
>
> Key: HDFS-14314
> URL: https://issues.apache.org/jira/browse/HDFS-14314
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.4
> Environment:  
>  
>  
>Reporter: star
>Assignee: star
>Priority: Critical
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14314-branch-2.8.001.patch, 
> HDFS-14314-trunk.001.patch, HDFS-14314-trunk.001.patch, 
> HDFS-14314-trunk.002.patch, HDFS-14314-trunk.003.patch, 
> HDFS-14314-trunk.004.patch, HDFS-14314-trunk.005.patch, 
> HDFS-14314-trunk.006.patch, HDFS-14314.0.patch, HDFS-14314.2.patch, 
> HDFS-14314.branch-2.001.patch, HDFS-14314.patch
>
>
>   Since HDFS-7923, to rate-limit DN block reports, the DN asks for a full 
> block report lease id from the active NN before sending the full block 
> report, and then sends the full block report together with that lease id. If 
> the lease id is invalid, the NN rejects the full block report and logs "not 
> in the pending set".
>   Consider the case where the DN is doing full block reporting while the NN 
> is restarted. The DN will later send a full block report with a lease id, 
> acquired from the previous NN instance, which is invalid to the new NN 
> instance. Though the DN recognized the new NN instance by heartbeat and 
> reregistered itself, it did not reset the lease id from the previous instance.
>   The issue may cause DNs to temporarily go dead, making it unsafe to 
> restart the NN, especially in a Hadoop cluster with a large number of DNs. 
> HDFS-12914 reported the issue without any clue why it occurred, and it 
> remained unsolved.
>    To make it clear, look at the code below, taken from the offerService 
> method of class BPServiceActor; some code is elided to focus on the current 
> issue. fullBlockReportLeaseId is a local variable holding the lease id from 
> the NN. An exception occurs at the blockReport call while the NN is 
> restarting and is caught by the catch block in the while loop, so 
> fullBlockReportLeaseId is never set back to 0. After the NN restarts, the DN 
> sends a full block report that is rejected by the new NN instance, and the DN 
> will not send another full block report until the next schedule, about an 
> hour later.
>   The solution is simple: just reset fullBlockReportLeaseId to 0 after any 
> exception or after registering to the NN, so the DN asks the new NN instance 
> for a valid fullBlockReportLeaseId.
> {code:java}
> private void offerService() throws Exception {
>   long fullBlockReportLeaseId = 0;
>   //
>   // Now loop for a long time
>   //
>   while (shouldRun()) {
> try {
>   final long startTime = scheduler.monotonicNow();
>   //
>   // Every so often, send heartbeat or block-report
>   //
>   final boolean sendHeartbeat = scheduler.isHeartbeatDue(startTime);
>   HeartbeatResponse resp = null;
>   if (sendHeartbeat) {
>   
> boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) &&
> scheduler.isBlockReportDue(startTime);
> scheduler.scheduleNextHeartbeat();
> if (!dn.areHeartbeatsDisabledForTests()) {
>   resp = sendHeartBeat(requestBlockReportLease);
>   assert resp != null;
>   if (resp.getFullBlockReportLeaseId() != 0) {
> if (fullBlockReportLeaseId != 0) {
>   LOG.warn(nnAddr + " sent back a full block report lease " +
>   "ID of 0x" +
>   Long.toHexString(resp.getFullBlockReportLeaseId()) +
>   ", but we already have a lease ID of 0x" +
>   Long.toHexString(fullBlockReportLeaseId) + ". " +
>   "Overwriting old lease ID.");
> }
> fullBlockReportLeaseId = resp.getFullBlockReportLeaseId();
>   }
>  
> }
>   }
>
>  
>   if ((fullBlockReportLeaseId != 0) || forceFullBr) {
> //Exception occurred here when NN restarting
> cmds = blockReport(fullBlockReportLeaseId);
> fullBlockReportLeaseId = 0;
>   }
>   
> } catch(RemoteException re) {
>   
>   } // while (shouldRun())
> } // offerService{code}
>  
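The failure mode and the fix can be simulated in isolation as below. This is a minimal, hypothetical model of the lease handshake; `blockReport`, the lease values, and the class name are illustrative and do not reproduce the real BPServiceActor code:

```java
// Minimal, illustrative model of the bug and the fix. A restarted NN only
// accepts lease ids it issued itself; a DN holding a lease from the old
// instance must reset it to 0 so the next heartbeat requests a fresh one.
public class LeaseResetSketch {

  /** Simulated NN-side check: rejects any lease the current NN did not issue. */
  static void blockReport(long leaseId, long nnCurrentLease) {
    if (leaseId != nnCurrentLease) {
      throw new IllegalStateException(
          "lease 0x" + Long.toHexString(leaseId) + " not in the pending set");
    }
  }

  /** DN behavior with the fix: the held lease id is zeroed even on failure. */
  static long reportWithFix(long heldLease, long nnCurrentLease) {
    try {
      blockReport(heldLease, nnCurrentLease);
    } catch (IllegalStateException rejected) {
      // the fix: fall through and reset below
    }
    return 0; // always reset, so a fresh lease is requested next heartbeat
  }

  /** DN behavior before the fix: a rejected (stale) lease id survives. */
  static long reportWithoutFix(long heldLease, long nnCurrentLease) {
    try {
      blockReport(heldLease, nnCurrentLease);
      return 0;
    } catch (IllegalStateException rejected) {
      return heldLease; // bug: stale lease kept until the next schedule
    }
  }
}
```

With the fix, a rejected report leaves the lease at 0, so the next heartbeat asks the new NN for a valid lease instead of waiting roughly an hour for the next scheduled report.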






[jira] [Work logged] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-594?focusedWorklogId=207548=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207548
 ]

ASF GitHub Bot logged work on HDDS-594:
---

Author: ASF GitHub Bot
Created on: 05/Mar/19 01:18
Start Date: 05/Mar/19 01:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #547: HDDS-594. SCM CA: 
DN sends CSR and uses certificate issued by SCM.
URL: https://github.com/apache/hadoop/pull/547#issuecomment-469495030
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 6 | https://github.com/apache/hadoop/pull/547 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/547 |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 207548)
Time Spent: 20m  (was: 10m)

> SCM CA: DN sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-594
> URL: https://issues.apache.org/jira/browse/HDDS-594
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-594.00.patch, HDDS-594.01.patch, HDDS-594.02.patch, 
> HDDS-594.03.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>







[jira] [Work logged] (HDDS-594) SCM CA: DN sends CSR and uses certificate issued by SCM

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-594?focusedWorklogId=207547=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207547
 ]

ASF GitHub Bot logged work on HDDS-594:
---

Author: ASF GitHub Bot
Created on: 05/Mar/19 01:17
Start Date: 05/Mar/19 01:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #547: HDDS-594. SCM CA: 
DN sends CSR and uses certificate issued by SCM.
URL: https://github.com/apache/hadoop/pull/547#issuecomment-469494762
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/547 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/547 |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 207547)
Time Spent: 10m
Remaining Estimate: 0h

> SCM CA: DN sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-594
> URL: https://issues.apache.org/jira/browse/HDDS-594
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-594.00.patch, HDDS-594.01.patch, HDDS-594.02.patch, 
> HDDS-594.03.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>







[jira] [Commented] (HDFS-3246) pRead equivalent for direct read path

2019-03-04 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783945#comment-16783945
 ] 

Wei-Chiu Chuang commented on HDFS-3246:
---

I'm trying to understand the implementation within CryptoInputStream, but it's 
not that straightforward and my math is rough.

That said, I think you were trying to imitate {{decrypt(long position, byte[] 
buffer, int offset, int length)}}, and I think I spot some math errors. The 
variable names are confusing, which probably explains part of the errors.

 
{code:java}
while (len < n) {
  buf.position(start + len);
  buf.limit(start + len + Math.min(n - len, inBuffer.remaining()));
  inBuffer.put(buf);
  // Do decryption
  try {
decrypt(decryptor, inBuffer, outBuffer, padding);
buf.position(start + len); --> not needed?
buf.limit(limit); --> not needed?
len += outBuffer.remaining(); --> len += Math.min(n - len, 
inBuffer.remaining())?
buf.put(outBuffer);
  } finally {
padding = afterDecryption(decryptor, inBuffer, "position + n", iv); --> 
position + len?
  }
}
{code}
 

> pRead equivalent for direct read path
> -
>
> Key: HDFS-3246
> URL: https://issues.apache.org/jira/browse/HDFS-3246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, performance
>Affects Versions: 3.0.0-alpha1
>Reporter: Henry Robinson
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-3246.001.patch, HDFS-3246.002.patch, 
> HDFS-3246.003.patch, HDFS-3246.004.patch, HDFS-3246.005.patch, 
> HDFS-3246.006.patch
>
>
> There is no pread equivalent in ByteBufferReadable. We should consider adding 
> one. It would be relatively easy to implement for the distributed case 
> (certainly compared to HDFS-2834), since DFSInputStream does most of the 
> heavy lifting.
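The kind of API the issue calls for could look like the sketch below, together with a toy in-memory implementation showing the contract. The interface shape and names here are a hypothetical illustration, not necessarily the exact interface committed for HDFS-3246:

```java
import java.io.IOException;
import java.nio.ByteBuffer;

// Hypothetical positional-read analogue of ByteBufferReadable: read into a
// ByteBuffer at an absolute file position without moving the stream offset.
interface ByteBufferPositionedReadable {
  /**
   * Reads up to buf.remaining() bytes starting at the given position,
   * leaving the stream's current offset untouched. Returns the number of
   * bytes read, or -1 at end of stream.
   */
  int read(long position, ByteBuffer buf) throws IOException;
}

// Toy in-memory implementation demonstrating the pread semantics.
class ByteArrayPread implements ByteBufferPositionedReadable {
  private final byte[] data;

  ByteArrayPread(byte[] data) {
    this.data = data;
  }

  @Override
  public int read(long position, ByteBuffer buf) {
    if (position >= data.length) {
      return -1; // past end of "file"
    }
    int n = (int) Math.min(buf.remaining(), data.length - position);
    buf.put(data, (int) position, n);
    return n;
  }
}
```

Because the call carries its own position, concurrent readers need no seek-then-read locking, which is the usual motivation for a pread-style API.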






[jira] [Commented] (HDFS-14336) Fix checkstyle for NameNodeMXBean.java

2019-03-04 Thread Danny Becker (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783937#comment-16783937
 ] 

Danny Becker commented on HDFS-14336:
-

No testing was added to this patch because it only fixes checkstyle issues.

> Fix checkstyle for NameNodeMXBean.java
> --
>
> Key: HDFS-14336
> URL: https://issues.apache.org/jira/browse/HDFS-14336
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Trivial
> Attachments: HDFS-14336.000.patch, HDFS-14336.001.patch
>
>
> Fix checkstyle in NameNodeMXBean.java and make it more uniform.






[jira] [Commented] (HDFS-14314) fullBlockReportLeaseId should be reset after registering to NN

2019-03-04 Thread star (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783942#comment-16783942
 ] 

star commented on HDFS-14314:
-

[~jojochuang], appending to your summarization:

     2. Afterwards, the DN resets its local lease id to 0 {color:#59afe1}right after 
reregistering to the new NN instance.{color}

 

{color:#33}As for branch-2.8, should I submit a new patch named like 
HDFS-14314-branch-2.8.001.patch?{color}

> fullBlockReportLeaseId should be reset after registering to NN
> --
>
> Key: HDFS-14314
> URL: https://issues.apache.org/jira/browse/HDFS-14314
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.4
> Environment:  
>  
>  
>Reporter: star
>Assignee: star
>Priority: Critical
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14314-trunk.001.patch, HDFS-14314-trunk.001.patch, 
> HDFS-14314-trunk.002.patch, HDFS-14314-trunk.003.patch, 
> HDFS-14314-trunk.004.patch, HDFS-14314-trunk.005.patch, 
> HDFS-14314-trunk.006.patch, HDFS-14314.0.patch, HDFS-14314.2.patch, 
> HDFS-14314.branch-2.001.patch, HDFS-14314.patch
>
>
>   Since HDFS-7923, to rate-limit DN block reports, the DN asks for a full 
> block report lease id from the active NN before sending the full block 
> report, and then sends the full block report together with that lease id. If 
> the lease id is invalid, the NN rejects the full block report and logs "not 
> in the pending set".
>   Consider the case where the DN is doing full block reporting while the NN 
> is restarted. The DN will later send a full block report with a lease id, 
> acquired from the previous NN instance, which is invalid to the new NN 
> instance. Though the DN recognized the new NN instance by heartbeat and 
> reregistered itself, it did not reset the lease id from the previous instance.
>   The issue may cause DNs to temporarily go dead, making it unsafe to 
> restart the NN, especially in a Hadoop cluster with a large number of DNs. 
> HDFS-12914 reported the issue without any clue why it occurred, and it 
> remained unsolved.
>    To make it clear, look at the code below, taken from the offerService 
> method of class BPServiceActor; some code is elided to focus on the current 
> issue. fullBlockReportLeaseId is a local variable holding the lease id from 
> the NN. An exception occurs at the blockReport call while the NN is 
> restarting and is caught by the catch block in the while loop, so 
> fullBlockReportLeaseId is never set back to 0. After the NN restarts, the DN 
> sends a full block report that is rejected by the new NN instance, and the DN 
> will not send another full block report until the next schedule, about an 
> hour later.
>   The solution is simple: just reset fullBlockReportLeaseId to 0 after any 
> exception or after registering to the NN, so the DN asks the new NN instance 
> for a valid fullBlockReportLeaseId.
> {code:java}
> private void offerService() throws Exception {
>   long fullBlockReportLeaseId = 0;
>   //
>   // Now loop for a long time
>   //
>   while (shouldRun()) {
> try {
>   final long startTime = scheduler.monotonicNow();
>   //
>   // Every so often, send heartbeat or block-report
>   //
>   final boolean sendHeartbeat = scheduler.isHeartbeatDue(startTime);
>   HeartbeatResponse resp = null;
>   if (sendHeartbeat) {
>   
> boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) &&
> scheduler.isBlockReportDue(startTime);
> scheduler.scheduleNextHeartbeat();
> if (!dn.areHeartbeatsDisabledForTests()) {
>   resp = sendHeartBeat(requestBlockReportLease);
>   assert resp != null;
>   if (resp.getFullBlockReportLeaseId() != 0) {
> if (fullBlockReportLeaseId != 0) {
>   LOG.warn(nnAddr + " sent back a full block report lease " +
>   "ID of 0x" +
>   Long.toHexString(resp.getFullBlockReportLeaseId()) +
>   ", but we already have a lease ID of 0x" +
>   Long.toHexString(fullBlockReportLeaseId) + ". " +
>   "Overwriting old lease ID.");
> }
> fullBlockReportLeaseId = resp.getFullBlockReportLeaseId();
>   }
>  
> }
>   }
>
>  
>   if ((fullBlockReportLeaseId != 0) || forceFullBr) {
> // Exception occurred here when the NN is restarting
> cmds = blockReport(fullBlockReportLeaseId);
> fullBlockReportLeaseId = 0;
>   }
>   
>     } catch (RemoteException re) {
>       // exception handling elided; fullBlockReportLeaseId is not reset here
>     }
>   } // while (shouldRun())
> } // offerService{code}
>  
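The lease-reset fix described above can be illustrated with a minimal, self-contained simulation. Everything below (LeaseResetSim, NameNodeSim, and their methods) is a hypothetical stand-in for the DN/NN interaction, not real Hadoop code: the NN only accepts lease ids it issued itself, so a lease acquired from the previous instance is rejected, while resetting to 0 and re-requesting succeeds.

```java
import java.util.HashSet;
import java.util.Set;

public class LeaseResetSim {
    /** Stand-in NN: accepts only full block report lease ids it issued itself. */
    static class NameNodeSim {
        private final Set<Long> pending = new HashSet<>();
        private long next = 1;

        long issueLease() {
            long id = next++;
            pending.add(id);
            return id;
        }

        /** Mirrors the "not in the pending set" check on the NN side. */
        boolean acceptFullBlockReport(long leaseId) {
            return pending.remove(leaseId);
        }
    }

    public static void main(String[] args) {
        NameNodeSim oldNn = new NameNodeSim();
        long leaseId = oldNn.issueLease();      // DN acquires a lease, then the NN restarts

        NameNodeSim newNn = new NameNodeSim();  // new NN instance after restart

        // Buggy behavior: the DN keeps the stale lease id across the restart,
        // so the new NN instance rejects the full block report.
        System.out.println("stale lease accepted: "
                + newNn.acceptFullBlockReport(leaseId));

        // Fixed behavior: reset to 0 on exception/re-registration, which makes
        // the DN ask the new NN instance for a fresh lease.
        leaseId = 0;
        if (leaseId == 0) {
            leaseId = newNn.issueLease();
        }
        System.out.println("fresh lease accepted: "
                + newNn.acceptFullBlockReport(leaseId));
    }
}
```

Running this shows the stale lease being rejected and the fresh one accepted, which is exactly the difference between waiting out the hour-long schedule and recovering immediately.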



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HDFS-14336) Fix checkstyle for NameNodeMXBean.java

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783932#comment-16783932
 ] 

Hadoop QA commented on HDFS-14336:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 51 fixed = 1 total (was 51) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 8 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14336 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961054/HDFS-14336.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 60f7e5ece303 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cb0fa0c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | 

[jira] [Updated] (HDFS-14336) Fix checkstyle for NameNodeMXBean.java

2019-03-04 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14336:

Attachment: HDFS-14336.001.patch

> Fix checkstyle for NameNodeMXBean.java
> --
>
> Key: HDFS-14336
> URL: https://issues.apache.org/jira/browse/HDFS-14336
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Trivial
> Attachments: HDFS-14336.000.patch, HDFS-14336.001.patch
>
>
> Fix checkstyle in NameNodeMXBean.java and make it more uniform.






[jira] [Commented] (HDDS-807) Period should be an invalid character in bucket names

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783911#comment-16783911
 ] 

Hadoop QA commented on HDDS-807:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} ozonefs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-807 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961068/HDDS-807.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 874db6627388 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 90c37ac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2440/testReport/ |
| Max. process+thread count | 2831 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozonefs U: hadoop-ozone/ozonefs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2440/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Period should be an invalid character in bucket names
> 

[jira] [Commented] (HDFS-12345) Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)

2019-03-04 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783915#comment-16783915
 ] 

Siyao Meng commented on HDFS-12345:
---

Hi [~xkrogen], I changed the status of this jira from PATCH AVAILABLE to IN 
PROGRESS, but I was unable to upload the patch via the "Submit Patch" button, 
so I uploaded the rev 002 patch directly. Now I can't change the jira status 
back. Could you help change the status back to PATCH AVAILABLE so it can 
trigger Jenkins? Thanks!

> Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)
> --
>
> Key: HDFS-12345
> URL: https://issues.apache.org/jira/browse/HDFS-12345
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode, test
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-12345.000.patch, HDFS-12345.001.patch, 
> HDFS-12345.002.patch
>
>
> Dynamometer has now been open sourced on our [GitHub 
> page|https://github.com/linkedin/dynamometer]. Read more at our [recent blog 
> post|https://engineering.linkedin.com/blog/2018/02/dynamometer--scale-testing-hdfs-on-minimal-hardware-with-maximum].
> To encourage getting the tool into the open for others to use as quickly as 
> possible, we went through our standard open sourcing process of releasing on 
> GitHub. However we are interested in the possibility of donating this to 
> Apache as part of Hadoop itself and would appreciate feedback on whether or 
> not this is something that would be supported by the community.
> Also of note, previous [discussions on the dev mail 
> lists|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201707.mbox/%3c98fceffa-faff-4cf1-a14d-4faab6567...@gmail.com%3e]






[jira] [Commented] (HDFS-14261) Kerberize JournalNodeSyncer unit test

2019-03-04 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783912#comment-16783912
 ] 

Siyao Meng commented on HDFS-14261:
---

[~ayushtkn] Yeah, I have also noticed a lot of timeouts from the Kerberized 
JournalNodeSync unit tests lately. The tests pass locally, but we need to look 
into why they fail so often under heavy load (Jenkins). I asked [~jojochuang] 
to revert it for the moment. Thanks!

> Kerberize JournalNodeSyncer unit test
> -
>
> Key: HDFS-14261
> URL: https://issues.apache.org/jira/browse/HDFS-14261
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: journal-node, security, test
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14261.001.patch
>
>
> This jira is an addition to HDFS-14140. Making the unit tests in 
> TestJournalNodeSync run on a Kerberized cluster.






[jira] [Work logged] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1072?focusedWorklogId=207525=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207525
 ]

ASF GitHub Bot logged work on HDDS-1072:


Author: ASF GitHub Bot
Created on: 04/Mar/19 23:52
Start Date: 04/Mar/19 23:52
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #548: Revert 
"HDDS-1072. Implement RetryProxy and FailoverProxy for OM clie…
URL: https://github.com/apache/hadoop/pull/548#issuecomment-469475348
 
 
   This has already been done in trunk.
   
   commit b18c1c22ea238c4b783031402496164f0351b531
   Author: Hanisha Koneru 
   Date:   Fri Mar 1 20:05:12 2019 -0800
   
   Revert "HDDS-1072. Implement RetryProxy and FailoverProxy for OM client.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 207525)
Time Spent: 0.5h  (was: 20m)

> Implement RetryProxy and FailoverProxy for OM client
> 
>
> Key: HDDS-1072
> URL: https://issues.apache.org/jira/browse/HDDS-1072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1072.001.patch, HDDS-1072.002.patch, 
> HDDS-1072.003.patch, HDDS-1072.004.patch, HDDS-1072.005.patch, 
> HDDS-1072.006.patch, HDDS-1072.007.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> RPC Client should implement a retry and failover proxy provider to fail over 
> between OM Ratis clients. The failover should occur in two scenarios:
> # When the client is unable to connect to the OM (either because of network 
> issues or because the OM is down), the client retry proxy provider should 
> fail over to the next OM in the cluster.
> # When the OM Ratis client receives a response from the Ratis server for its 
> request, it also gets the LeaderId of the server which processed the request 
> (the current leader OM nodeId). This information should be propagated back to 
> the client, and the client failover proxy provider should fail over to the 
> leader OM node. This helps avoid an extra hop from a follower OM Ratis client 
> to the leader OM Ratis server for every request.
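The two failover scenarios above can be sketched as a small, self-contained client-side proxy. OmEndpoint, OmResponse, and FailoverProxy below are hypothetical stand-ins for illustration only, not the actual Ozone client classes:

```java
import java.util.List;

public class OmFailoverSketch {
    /** Response carrying the index of the current leader OM. */
    static final class OmResponse {
        final String body;
        final int leaderIndex;
        OmResponse(String body, int leaderIndex) {
            this.body = body;
            this.leaderIndex = leaderIndex;
        }
    }

    /** One OM endpoint; throws RuntimeException when unreachable. */
    interface OmEndpoint {
        OmResponse submit(String request);
    }

    /** Client-side proxy implementing the two failover scenarios. */
    static class FailoverProxy {
        private final List<OmEndpoint> oms;
        private int current = 0;

        FailoverProxy(List<OmEndpoint> oms) { this.oms = oms; }

        OmResponse submit(String request, int maxAttempts) {
            RuntimeException last = null;
            for (int attempt = 0; attempt < maxAttempts; attempt++) {
                try {
                    OmResponse resp = oms.get(current).submit(request);
                    // Scenario 2: the response names the leader; go there next.
                    current = resp.leaderIndex;
                    return resp;
                } catch (RuntimeException e) {
                    // Scenario 1: OM unreachable; fail over to the next OM.
                    last = e;
                    current = (current + 1) % oms.size();
                }
            }
            throw last;
        }
    }

    public static void main(String[] args) {
        OmEndpoint down = req -> { throw new RuntimeException("OM1 unreachable"); };
        OmEndpoint follower = req -> new OmResponse("processed by follower", 2);
        OmEndpoint leader = req -> new OmResponse("processed by leader", 2);

        FailoverProxy proxy = new FailoverProxy(List.of(down, follower, leader));
        System.out.println(proxy.submit("createVolume", 5).body);
        // The leader hint from the first response sends the second request
        // straight to the leader, skipping the down OM entirely.
        System.out.println(proxy.submit("createKey", 5).body);
    }
}
```

The first request fails over from the unreachable OM to a follower; the leader hint in that response routes every subsequent request directly to the leader, avoiding the extra hop.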






[jira] [Work logged] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1072?focusedWorklogId=207526=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207526
 ]

ASF GitHub Bot logged work on HDDS-1072:


Author: ASF GitHub Bot
Created on: 04/Mar/19 23:52
Start Date: 04/Mar/19 23:52
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #548: Revert 
"HDDS-1072. Implement RetryProxy and FailoverProxy for OM clie…
URL: https://github.com/apache/hadoop/pull/548
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 207526)
Time Spent: 40m  (was: 0.5h)

> Implement RetryProxy and FailoverProxy for OM client
> 
>
> Key: HDDS-1072
> URL: https://issues.apache.org/jira/browse/HDDS-1072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1072.001.patch, HDDS-1072.002.patch, 
> HDDS-1072.003.patch, HDDS-1072.004.patch, HDDS-1072.005.patch, 
> HDDS-1072.006.patch, HDDS-1072.007.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> RPC Client should implement a retry and failover proxy provider to fail over 
> between OM Ratis clients. The failover should occur in two scenarios:
> # When the client is unable to connect to the OM (either because of network 
> issues or because the OM is down), the client retry proxy provider should 
> fail over to the next OM in the cluster.
> # When the OM Ratis client receives a response from the Ratis server for its 
> request, it also gets the LeaderId of the server which processed the request 
> (the current leader OM nodeId). This information should be propagated back to 
> the client, and the client failover proxy provider should fail over to the 
> leader OM node. This helps avoid an extra hop from a follower OM Ratis client 
> to the leader OM Ratis server for every request.






[jira] [Commented] (HDFS-14333) Datanode fails to start if any disk has errors during Namenode registration

2019-03-04 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783901#comment-16783901
 ] 

Wei-Chiu Chuang commented on HDFS-14333:


Fix looks good.

Regarding the test strategy, I guess you can mock the data object within 
DataNode and let it throw an AddBlockPoolException when addBlockPool() is 
called.

The data object is instantiated from the factory method, and you can have the 
factory return a custom FsDatasetSpi implementation by setting the 
configuration {{dfs.datanode.fsdataset.factory}}.

As an example, {{TestRead#testInterruptReader}} has the following:

{code:java}
conf.set(DFSConfigKeys.DFS_DATANODE_FSDATASET_FACTORY_KEY,
    DelayedSimulatedFSDataset.Factory.class.getName());{code}
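The same factory-override pattern can be shown in a self-contained sketch. Dataset, FaultyDataset, and the datanode.dataset.factory key below are simplified, hypothetical stand-ins for FsDatasetSpi, its factory, and {{dfs.datanode.fsdataset.factory}} — assumptions for illustration, not real Hadoop types:

```java
import java.util.HashMap;
import java.util.Map;

public class FactoryInjectionSketch {
    interface Dataset {
        void addBlockPool(String bpId);
    }

    /** Faulty implementation: every addBlockPool() call fails. */
    public static class FaultyDataset implements Dataset {
        public FaultyDataset() {}
        @Override
        public void addBlockPool(String bpId) {
            // Simulates an AddBlockPoolException caused by a bad volume.
            throw new RuntimeException("AddBlockPoolException: disk error in " + bpId);
        }
    }

    /** Stand-in for reading a dfs.datanode.fsdataset.factory-style setting. */
    static Dataset createDataset(Map<String, String> conf) {
        String cls = conf.getOrDefault("datanode.dataset.factory",
                FaultyDataset.class.getName());
        try {
            return (Dataset) Class.forName(cls).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("cannot instantiate " + cls, e);
        }
    }

    public static void main(String[] args) {
        // Point the "configuration" at the faulty implementation, exactly as a
        // unit test would point dfs.datanode.fsdataset.factory at a test class.
        Map<String, String> conf = new HashMap<>();
        conf.put("datanode.dataset.factory", FaultyDataset.class.getName());
        Dataset data = createDataset(conf);
        try {
            data.addBlockPool("BP-1");
        } catch (RuntimeException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

A test configured this way gets a dataset whose addBlockPool() fails, letting it assert that the DataNode surfaces the error as intended.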
 

 

> Datanode fails to start if any disk has errors during Namenode registration
> ---
>
> Key: HDFS-14333
> URL: https://issues.apache.org/jira/browse/HDFS-14333
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14333.001.patch
>
>
> This is closely related to HDFS-9908, where it was reported that a datanode 
> would fail to start if an IO error occurred on a single disk when running du 
> during Datanode registration. That Jira was closed due to HADOOP-12973 which 
> refactored how du is called and prevents any exception being thrown. However 
> this problem can still occur if the volume has errors (eg permission or 
> filesystem corruption) when the disk is scanned to load all the replicas. The 
> method chain is:
> DataNode.initBlockPool -> FSDataSetImpl.addBlockPool -> 
> FSVolumeList.getAllVolumesMap -> Throws exception which goes unhandled.
> The DN logs will contain a stack trace for the problem volume, so the 
> workaround is to remove the volume from the DN config and the DN will start, 
> but the logs are a little confusing, so it's not always obvious what the 
> issue is.
> These are the cut down logs from an occurrence of this issue.
> {code}
> 2019-03-01 08:58:24,830 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-240961797-x.x.x.x-1392827522027 on volume 
> /data/18/dfs/dn/current...
> ...
> 2019-03-01 08:58:27,029 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Could 
> not get disk usage information
> ExitCodeException exitCode=1: du: cannot read directory 
> `/data/18/dfs/dn/current/BP-240961797-x.x.x.x-1392827522027/current/finalized/subdir149/subdir215':
>  Permission denied
> du: cannot read directory 
> `/data/18/dfs/dn/current/BP-240961797-x.x.x.x-1392827522027/current/finalized/subdir149/subdir213':
>  Permission denied
> du: cannot read directory 
> `/data/18/dfs/dn/current/BP-240961797-x.x.x.x-1392827522027/current/finalized/subdir97/subdir25':
>  Permission denied
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:601)
>   at org.apache.hadoop.util.Shell.run(Shell.java:504)
>   at org.apache.hadoop.fs.DU$DUShell.startRefresh(DU.java:61)
>   at org.apache.hadoop.fs.DU.refresh(DU.java:53)
>   at 
> org.apache.hadoop.fs.CachingGetSpaceUsed.init(CachingGetSpaceUsed.java:84)
>   at 
> org.apache.hadoop.fs.GetSpaceUsed$Builder.build(GetSpaceUsed.java:166)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.(BlockPoolSlice.java:145)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:881)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$2.run(FsVolumeList.java:412)
> ...
> 2019-03-01 08:58:27,043 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time 
> taken to scan block pool BP-240961797-x.x.x.x-1392827522027 on 
> /data/18/dfs/dn/current: 2202ms
> {code}
> So we can see a du error occurred, was logged but not re-thrown (due to 
> HADOOP-12973) and the blockpool scan completed. However then in the 'add 
> replicas to map' logic, we got another exception stemming from the same 
> problem:
> {code}
> 2019-03-01 08:58:27,564 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding 
> replicas to map for block pool BP-240961797-x.x.x.x-1392827522027 on volume 
> /data/18/dfs/dn/current...
> ...
> 2019-03-01 08:58:31,155 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Caught 
> exception while adding replicas from /data/18/dfs/dn/current. Will throw 
> later.
> java.io.IOException: Invalid directory or I/O error occurred for dir: 
> /data/18/dfs/dn/current/BP-240961797-x.x.x.x-1392827522027/current/finalized/subdir149/subdir215
>   at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1167)
>   

[jira] [Commented] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783897#comment-16783897
 ] 

Hadoop QA commented on HDDS-1072:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 8s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m 
12s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 16m  
7s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 16m  7s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m  7s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 22s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 15s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m  2s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense 

[jira] [Updated] (HDFS-12345) Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)

2019-03-04 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-12345:
--
Attachment: HDFS-12345.002.patch

> Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)
> --
>
> Key: HDFS-12345
> URL: https://issues.apache.org/jira/browse/HDFS-12345
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode, test
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-12345.000.patch, HDFS-12345.001.patch, 
> HDFS-12345.002.patch
>
>
> Dynamometer has now been open sourced on our [GitHub 
> page|https://github.com/linkedin/dynamometer]. Read more at our [recent blog 
> post|https://engineering.linkedin.com/blog/2018/02/dynamometer--scale-testing-hdfs-on-minimal-hardware-with-maximum].
> To encourage getting the tool into the open for others to use as quickly as 
> possible, we went through our standard open sourcing process of releasing on 
> GitHub. However we are interested in the possibility of donating this to 
> Apache as part of Hadoop itself and would appreciate feedback on whether or 
> not this is something that would be supported by the community.
> Also of note, previous [discussions on the dev mail 
> lists|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201707.mbox/%3c98fceffa-faff-4cf1-a14d-4faab6567...@gmail.com%3e]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12345) Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)

2019-03-04 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783895#comment-16783895
 ] 

Siyao Meng commented on HDFS-12345:
---

Uploaded rev 002 to integrate my additions from the above comment (on top of 
rev 001).

> Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)
> --
>
> Key: HDFS-12345
> URL: https://issues.apache.org/jira/browse/HDFS-12345
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode, test
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-12345.000.patch, HDFS-12345.001.patch, 
> HDFS-12345.002.patch
>
>
> Dynamometer has now been open sourced on our [GitHub 
> page|https://github.com/linkedin/dynamometer]. Read more at our [recent blog 
> post|https://engineering.linkedin.com/blog/2018/02/dynamometer--scale-testing-hdfs-on-minimal-hardware-with-maximum].
> To encourage getting the tool into the open for others to use as quickly as 
> possible, we went through our standard open sourcing process of releasing on 
> GitHub. However we are interested in the possibility of donating this to 
> Apache as part of Hadoop itself and would appreciate feedback on whether or 
> not this is something that would be supported by the community.
> Also of note, previous [discussions on the dev mail 
> lists|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201707.mbox/%3c98fceffa-faff-4cf1-a14d-4faab6567...@gmail.com%3e]






[jira] [Updated] (HDFS-12345) Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)

2019-03-04 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-12345:
--
Status: In Progress  (was: Patch Available)

> Scale testing HDFS NameNode with real metadata and workloads (Dynamometer)
> --
>
> Key: HDFS-12345
> URL: https://issues.apache.org/jira/browse/HDFS-12345
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode, test
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-12345.000.patch, HDFS-12345.001.patch
>
>
> Dynamometer has now been open sourced on our [GitHub 
> page|https://github.com/linkedin/dynamometer]. Read more at our [recent blog 
> post|https://engineering.linkedin.com/blog/2018/02/dynamometer--scale-testing-hdfs-on-minimal-hardware-with-maximum].
> To encourage getting the tool into the open for others to use as quickly as 
> possible, we went through our standard open sourcing process of releasing on 
> GitHub. However we are interested in the possibility of donating this to 
> Apache as part of Hadoop itself and would appreciate feedback on whether or 
> not this is something that would be supported by the community.
> Also of note, previous [discussions on the dev mail 
> lists|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201707.mbox/%3c98fceffa-faff-4cf1-a14d-4faab6567...@gmail.com%3e]






[jira] [Commented] (HDFS-14335) RBF: Fix heartbeat typos in the Router.

2019-03-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783881#comment-16783881
 ] 

Íñigo Goiri commented on HDFS-14335:


Thanks [~crh] for the fix.
Committed to HDFS-13891.

> RBF: Fix heartbeat typos in the Router.
> ---
>
> Key: HDFS-14335
> URL: https://issues.apache.org/jira/browse/HDFS-14335
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Trivial
> Fix For: HDFS-13891
>
> Attachments: HDFS-14335-HDFS-13891.001.patch
>
>
> Found this while debugging a namenode heartbeat-related issue. Oddly, a search 
> for "heartbeat" wasn't giving the desired results until I realized this issue.






[jira] [Work logged] (HDDS-1136) Add metric counters to capture the RocksDB checkpointing statistics.

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1136?focusedWorklogId=207503&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207503
 ]

ASF GitHub Bot logged work on HDDS-1136:


Author: ASF GitHub Bot
Created on: 04/Mar/19 23:16
Start Date: 04/Mar/19 23:16
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #537: HDDS-1136 : Add 
metric counters to capture the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/537#issuecomment-469463610
 
 
   Patch committed to trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 207503)
Time Spent: 2h 10m  (was: 2h)

> Add metric counters to capture the RocksDB checkpointing statistics.
> 
>
> Key: HDDS-1136
> URL: https://issues.apache.org/jira/browse/HDDS-1136
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1136-000.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As per the discussion with [~anu] on HDDS-1085, this JIRA tracks the effort 
> to add metric counters to capture RocksDB checkpointing performance. 
> From [~anu]'s comments, it might be interesting to have 3 counters – or a map 
> of counters.
> * How much time are we taking for each CheckPoint
> * How much time are we taking for each Tar operation – along with sizes
> * How much time are we taking for the transfer.
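The three counters proposed above could be sketched as a small timing map. This is an illustrative Python sketch of the idea only, not Recon's actual Java metrics code; the class and key names (`CheckpointMetrics`, `checkpoint_ms`, etc.) are assumptions:

```python
class CheckpointMetrics:
    """Illustrative counters for the checkpoint, tar, and transfer phases."""

    PHASES = ("checkpoint_ms", "tar_ms", "transfer_ms")

    def __init__(self):
        # each phase accumulates total elapsed time and an invocation count;
        # the tar phase additionally tracks total archive size
        self.totals = {p: 0.0 for p in self.PHASES}
        self.counts = {p: 0 for p in self.PHASES}
        self.tar_bytes = 0

    def record(self, phase, elapsed_ms, size_bytes=0):
        self.totals[phase] += elapsed_ms
        self.counts[phase] += 1
        if phase == "tar_ms":
            self.tar_bytes += size_bytes

m = CheckpointMetrics()
m.record("checkpoint_ms", 120.0)
m.record("tar_ms", 45.0, size_bytes=10 * 1024 * 1024)
m.record("transfer_ms", 300.0)
print(m.totals, m.tar_bytes)
```

A map of counters like this matches the "3 counters – or a map of counters" suggestion: one entry per phase, with size tracked alongside the tar timing.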






[jira] [Work logged] (HDDS-1136) Add metric counters to capture the RocksDB checkpointing statistics.

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1136?focusedWorklogId=207502&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207502
 ]

ASF GitHub Bot logged work on HDDS-1136:


Author: ASF GitHub Bot
Created on: 04/Mar/19 23:16
Start Date: 04/Mar/19 23:16
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #537: HDDS-1136 : 
Add metric counters to capture the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/537
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 207502)
Time Spent: 2h  (was: 1h 50m)

> Add metric counters to capture the RocksDB checkpointing statistics.
> 
>
> Key: HDDS-1136
> URL: https://issues.apache.org/jira/browse/HDDS-1136
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1136-000.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> As per the discussion with [~anu] on HDDS-1085, this JIRA tracks the effort 
> to add metric counters to capture RocksDB checkpointing performance. 
> From [~anu]'s comments, it might be interesting to have 3 counters – or a map 
> of counters.
> * How much time are we taking for each CheckPoint
> * How much time are we taking for each Tar operation – along with sizes
> * How much time are we taking for the transfer.






[jira] [Updated] (HDFS-14335) RBF: Fix heartbeat typos in the Router.

2019-03-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14335:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-13891
   Status: Resolved  (was: Patch Available)

> RBF: Fix heartbeat typos in the Router.
> ---
>
> Key: HDFS-14335
> URL: https://issues.apache.org/jira/browse/HDFS-14335
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Trivial
> Fix For: HDFS-13891
>
> Attachments: HDFS-14335-HDFS-13891.001.patch
>
>
> Found this while debugging a namenode heartbeat-related issue. Oddly, a search 
> for "heartbeat" wasn't giving the desired results until I realized this issue.






[jira] [Commented] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-03-04 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783861#comment-16783861
 ] 

Arpit Agarwal commented on HDDS-1072:
-

+1 pending Jenkins.

> Implement RetryProxy and FailoverProxy for OM client
> 
>
> Key: HDDS-1072
> URL: https://issues.apache.org/jira/browse/HDDS-1072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1072.001.patch, HDDS-1072.002.patch, 
> HDDS-1072.003.patch, HDDS-1072.004.patch, HDDS-1072.005.patch, 
> HDDS-1072.006.patch, HDDS-1072.007.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> RPC Client should implement a retry and failover proxy provider to failover 
> between OM Ratis clients. The failover should occur in two scenarios:
> # When the client is unable to connect to the OM (either because of network 
> issues or because the OM is down). The client retry proxy provider should 
> failover to next OM in the cluster.
> # When OM Ratis Client receives a response from the Ratis server for its 
> request, it also gets the LeaderId of server which processed this request 
> (the current Leader OM nodeId). This information should be propagated back to 
> the client. The Client failover Proxy provider should failover to the leader 
> OM node. This helps avoid an extra hop from Follower OM Ratis Client to 
> Leader OM Ratis server for every request.
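The two failover triggers described above could be sketched as a minimal proxy provider. This is a hedged Python sketch of the behavior only, not Hadoop's actual RetryProxy/FailoverProxyProvider API; all names here are assumptions:

```python
class OMFailoverSketch:
    """Cycles through OM nodes on connection failure; jumps straight to a
    leader reported back by the Ratis server."""

    def __init__(self, om_nodes):
        self.om_nodes = list(om_nodes)  # e.g. ["om1", "om2", "om3"]
        self.current = 0

    def current_om(self):
        return self.om_nodes[self.current]

    def perform_failover(self, leader_id=None):
        if leader_id is not None and leader_id in self.om_nodes:
            # scenario 2: the response identified the leader; go there directly
            self.current = self.om_nodes.index(leader_id)
        else:
            # scenario 1: connection failure; try the next OM round-robin
            self.current = (self.current + 1) % len(self.om_nodes)
        return self.current_om()

p = OMFailoverSketch(["om1", "om2", "om3"])
p.perform_failover()                  # network failure -> moves to "om2"
p.perform_failover(leader_id="om3")   # leader hint -> jumps to "om3"
print(p.current_om())
```

Jumping directly to the reported leader is what saves the extra follower-to-leader hop mentioned in scenario 2.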






[jira] [Commented] (HDFS-14270) [SBN Read] StateId and TrasactionId not present in Trace level logging

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783873#comment-16783873
 ] 

Hadoop QA commented on HDFS-14270:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 204 unchanged - 0 fixed = 205 total (was 204) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14270 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961056/HDFS-14270.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e11cf7cc7858 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cb0fa0c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26402/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26402/testReport/ |
| Max. process+thread count | 1464 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 

[jira] [Updated] (HDDS-807) Period should be an invalid character in bucket names

2019-03-04 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-807:
-
Attachment: HDDS-807.04.patch

> Period should be an invalid character in bucket names
> -
>
> Key: HDDS-807
> URL: https://issues.apache.org/jira/browse/HDDS-807
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-807.01.patch, HDDS-807.02.patch, HDDS-807.03.patch, 
> HDDS-807.04.patch
>
>
> ozonefs paths use the following syntax:
> - o3fs://bucket.volume/..
> The OM host and port are read from configuration. We cannot specify a target 
> filesystem with a fully qualified path. E.g. 
> _o3fs://bucket.volume.om-host.example.com:9862/. Hence we cannot hand a fully 
> qualified URL with OM hostname to a client without setting up config files 
> beforehand. This is inconvenient. It also means there is no way to perform a 
> distcp from one Ozone cluster to another.
> We need a way to support fully qualified paths with OM hostname and port 
> _bucket.volume.om-host.example.com_. If we allow periods in bucket names, then 
> such fully qualified paths cannot be parsed unambiguously. However, if we 
> disallow periods, then we can support all of the following paths 
> unambiguously.
>  # *o3fs://bucket.volume/key* - The authority has only two period-separated 
> components. These must be bucket and volume name respectively.
>  # *o3fs://bucket.volume.om-host.example.com/key* - The authority has more 
> than two components. The first two must be bucket and volume, the rest must 
> be the hostname.
>  # *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, 
> except with a port number.
>  
> Open question is around HA support. I believe for HA we will have to 
> introduce the notion of a _nameservice_, similar to HDFS nameservice. This 
> will allow a fourth kind of Ozone URL:
>  - *o3fs://bucket.volume.ns1/key* - How do we distinguish this from #3 above? 
> One way could be to find if _ns1_ is known as an Ozone nameservice via 
> configuration. If so then treat it as the name of an HA service. Else treat 
> it as a hostname.
>  
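The disambiguation rules above can be sketched as a small authority parser. This is an illustrative Python sketch of the proposed rules, not Ozone's actual code; the nameservice lookup is stubbed as a plain membership check:

```python
def parse_o3fs_authority(authority, known_nameservices=()):
    """Split an o3fs authority into (bucket, volume, host, port) per the
    rules above: first two period-separated components are bucket and
    volume, the remainder (if any) is an OM hostname or HA nameservice."""
    host_part, _, port_str = authority.partition(":")
    port = int(port_str) if port_str else None
    parts = host_part.split(".")
    if len(parts) < 2:
        raise ValueError("authority needs at least bucket.volume")
    bucket, volume = parts[0], parts[1]
    rest = ".".join(parts[2:]) or None
    if rest in known_nameservices:
        # HA case: resolve via configuration rather than DNS
        return bucket, volume, ("nameservice", rest), port
    return bucket, volume, rest, port

print(parse_o3fs_authority("bucket.volume"))
print(parse_o3fs_authority("bucket.volume.om-host.example.com:5678"))
```

The open HA question maps to the `known_nameservices` check: a trailing component is treated as a nameservice only if configuration says so, otherwise as a hostname.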






[jira] [Commented] (HDFS-14335) RBF: Fix heartbeat typos in the Router.

2019-03-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783866#comment-16783866
 ] 

Íñigo Goiri commented on HDFS-14335:


+1 on [^HDFS-14335-HDFS-13891.001.patch].

> RBF: Fix heartbeat typos in the Router.
> ---
>
> Key: HDFS-14335
> URL: https://issues.apache.org/jira/browse/HDFS-14335
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Trivial
> Attachments: HDFS-14335-HDFS-13891.001.patch
>
>
> Found this while debugging a namenode heartbeat-related issue. Oddly, a search 
> for "heartbeat" wasn't giving the desired results until I realized this issue.






[jira] [Commented] (HDFS-14334) RBF: Use human readable format for long numbers in the Router UI

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783864#comment-16783864
 ] 

Hadoop QA commented on HDFS-14334:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14334 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961060/HDFS-14334-HDFS-13891.001.patch
 |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux 54ba50a9eac0 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 1d1dc8e |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26404/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Use human readable format for long numbers in the Router UI
> 
>
> Key: HDFS-14334
> URL: https://issues.apache.org/jira/browse/HDFS-14334
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14334-HDFS-13891.000.patch, 
> HDFS-14334-HDFS-13891.001.patch, block-files-numbers.png
>
>
> Currently, for the number of files, we show the raw number. When it starts to 
> get into millions, it is hard to read. We should use a human readable version 
> similar to what we do with PB, GB, MB,...
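A human-readable rendering of file counts in the spirit described above might look like the following. This is a hypothetical helper only; the suffix choices and one-decimal precision are assumptions, not the patch's actual JavaScript:

```python
def humanize_count(n):
    """Render a large count as 1.2k / 3.4M / 5.6B instead of raw digits."""
    for threshold, suffix in ((1_000_000_000, "B"), (1_000_000, "M"), (1_000, "k")):
        if n >= threshold:
            return f"{n / threshold:.1f}{suffix}"
    return str(n)

print(humanize_count(1234567))  # 1.2M
```

Small counts stay exact while millions of files collapse to a short, scannable figure, mirroring the PB/GB/MB formatting the UI already uses for capacity.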






[jira] [Comment Edited] (HDDS-807) Period should be an invalid character in bucket names

2019-03-04 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783859#comment-16783859
 ] 

Siddharth Wagle edited comment on HDDS-807 at 3/4/19 10:53 PM:
---

Fixed the mvn install issues in the 04 patch.


was (Author: swagle):
Fix for mvn install issues.

> Period should be an invalid character in bucket names
> -
>
> Key: HDDS-807
> URL: https://issues.apache.org/jira/browse/HDDS-807
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-807.01.patch, HDDS-807.02.patch, HDDS-807.03.patch, 
> HDDS-807.04.patch
>
>
> ozonefs paths use the following syntax:
> - o3fs://bucket.volume/..
> The OM host and port are read from configuration. We cannot specify a target 
> filesystem with a fully qualified path. E.g. 
> _o3fs://bucket.volume.om-host.example.com:9862/. Hence we cannot hand a fully 
> qualified URL with OM hostname to a client without setting up config files 
> beforehand. This is inconvenient. It also means there is no way to perform a 
> distcp from one Ozone cluster to another.
> We need a way to support fully qualified paths with OM hostname and port 
> _bucket.volume.om-host.example.com_. If we allow periods in bucket names, then 
> such fully qualified paths cannot be parsed unambiguously. However, if we 
> disallow periods, then we can support all of the following paths 
> unambiguously.
>  # *o3fs://bucket.volume/key* - The authority has only two period-separated 
> components. These must be bucket and volume name respectively.
>  # *o3fs://bucket.volume.om-host.example.com/key* - The authority has more 
> than two components. The first two must be bucket and volume, the rest must 
> be the hostname.
>  # *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, 
> except with a port number.
>  
> Open question is around HA support. I believe for HA we will have to 
> introduce the notion of a _nameservice_, similar to HDFS nameservice. This 
> will allow a fourth kind of Ozone URL:
>  - *o3fs://bucket.volume.ns1/key* - How do we distinguish this from #3 above? 
> One way could be to find if _ns1_ is known as an Ozone nameservice via 
> configuration. If so then treat it as the name of an HA service. Else treat 
> it as a hostname.
>  






[jira] [Commented] (HDDS-807) Period should be an invalid character in bucket names

2019-03-04 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783859#comment-16783859
 ] 

Siddharth Wagle commented on HDDS-807:
--

Fix for mvn install issues.

> Period should be an invalid character in bucket names
> -
>
> Key: HDDS-807
> URL: https://issues.apache.org/jira/browse/HDDS-807
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-807.01.patch, HDDS-807.02.patch, HDDS-807.03.patch, 
> HDDS-807.04.patch
>
>
> ozonefs paths use the following syntax:
> - o3fs://bucket.volume/..
> The OM host and port are read from configuration. We cannot specify a target 
> filesystem with a fully qualified path. E.g. 
> _o3fs://bucket.volume.om-host.example.com:9862/. Hence we cannot hand a fully 
> qualified URL with OM hostname to a client without setting up config files 
> beforehand. This is inconvenient. It also means there is no way to perform a 
> distcp from one Ozone cluster to another.
> We need a way to support fully qualified paths with OM hostname and port 
> _bucket.volume.om-host.example.com_. If we allow periods in bucket names, then 
> such fully qualified paths cannot be parsed unambiguously. However, if we 
> disallow periods, then we can support all of the following paths 
> unambiguously.
>  # *o3fs://bucket.volume/key* - The authority has only two period-separated 
> components. These must be bucket and volume name respectively.
>  # *o3fs://bucket.volume.om-host.example.com/key* - The authority has more 
> than two components. The first two must be bucket and volume, the rest must 
> be the hostname.
>  # *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, 
> except with a port number.
>  
> Open question is around HA support. I believe for HA we will have to 
> introduce the notion of a _nameservice_, similar to HDFS nameservice. This 
> will allow a fourth kind of Ozone URL:
>  - *o3fs://bucket.volume.ns1/key* - How do we distinguish this from #3 above? 
> One way could be to find if _ns1_ is known as an Ozone nameservice via 
> configuration. If so then treat it as the name of an HA service. Else treat 
> it as a hostname.
>  






[jira] [Work logged] (HDDS-1193) Refactor ContainerChillModeRule and DatanodeChillMode rule

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1193?focusedWorklogId=207489&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207489
 ]

ASF GitHub Bot logged work on HDDS-1193:


Author: ASF GitHub Bot
Created on: 04/Mar/19 22:51
Start Date: 04/Mar/19 22:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #534: HDDS-1193. 
Refactor ContainerChillModeRule and DatanodeChillMode rule.
URL: https://github.com/apache/hadoop/pull/534#issuecomment-469454263
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1073 | trunk passed |
   | +1 | compile | 43 | trunk passed |
   | +1 | checkstyle | 19 | trunk passed |
   | +1 | mvnsite | 29 | trunk passed |
   | +1 | shadedclient | 641 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 42 | trunk passed |
   | +1 | javadoc | 24 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 35 | the patch passed |
   | +1 | compile | 24 | the patch passed |
   | +1 | javac | 24 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | mvnsite | 27 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 730 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 19 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 132 | server-scm in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3049 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/534 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux a39858692bb7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cb0fa0c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/2/testReport/ |
   | Max. process+thread count | 560 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 207489)
Time Spent: 1h 20m  (was: 1h 10m)

> Refactor ContainerChillModeRule and DatanodeChillMode rule
> --
>
> Key: HDDS-1193
> URL: https://issues.apache.org/jira/browse/HDDS-1193
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The main intention of this Jira is to have all rules handle events in a 
> similar way.
> This Jira makes the following changes:
>  # Both DatanodeRule and ContainerRule implement EventHandler and listen for 
> NodeRegistrationContainerReport
>  # Update ScmChillModeManager so that it no longer handles any events (each 
> rule needs to handle its own event and act on it)






[jira] [Commented] (HDDS-623) On SCM UI, Node Manager info is empty

2019-03-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783855#comment-16783855
 ] 

Hudson commented on HDDS-623:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16120 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16120/])
HDDS-623. On SCM UI, Node Manager info is empty (#523) (7813154+ajayydv: rev 
90c37ac40de5827335b4f739e6330e1d3558971e)
* (edit) hadoop-hdds/server-scm/src/main/resources/webapps/scm/scm-overview.html
* (edit) hadoop-hdds/server-scm/src/main/resources/webapps/scm/scm.js


> On SCM UI, Node Manager info is empty
> -
>
> Key: HDDS-623
> URL: https://issues.apache.org/jira/browse/HDDS-623
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2018-10-10 at 4.19.59 PM.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The following fields are empty:
> Node Manager: Minimum chill mode nodes 
> Node Manager: Out-of-node chill mode 
> Node Manager: Chill mode status 
> Node Manager: Manual chill mode
> Please see attached screenshot !Screen Shot 2018-10-10 at 4.19.59 PM.png!






[jira] [Commented] (HDFS-14335) RBF: Fix heartbeat typos in the Router.

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783848#comment-16783848
 ] 

Hadoop QA commented on HDFS-14335:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
36s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m  
9s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14335 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961053/HDFS-14335-HDFS-13891.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3e476fae18a2 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / fc8dd2e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26400/testReport/ |
| Max. process+thread count | 1504 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26400/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Fix heartbeat typos in the Router.
> ---
>
> Key: 

[jira] [Updated] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-04 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1217:
-
Summary: Refactor ChillMode rules and chillmode manager  (was: Refactor 
ChillMode manager)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> # Make chillmodeExitRule an abstract class and move the logic common to all 
> rules into it.
>  # Update the chill mode tests accordingly.






[jira] [Work logged] (HDDS-623) On SCM UI, Node Manager info is empty

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-623?focusedWorklogId=207481=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207481
 ]

ASF GitHub Bot logged work on HDDS-623:
---

Author: ASF GitHub Bot
Created on: 04/Mar/19 22:35
Start Date: 04/Mar/19 22:35
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #523: HDDS-623. On 
SCM UI, Node Manager info is empty
URL: https://github.com/apache/hadoop/pull/523
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 207481)
Time Spent: 1h 40m  (was: 1.5h)

> On SCM UI, Node Manager info is empty
> -
>
> Key: HDDS-623
> URL: https://issues.apache.org/jira/browse/HDDS-623
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2018-10-10 at 4.19.59 PM.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The following fields are empty:
> Node Manager: Minimum chill mode nodes 
> Node Manager: Out-of-node chill mode 
> Node Manager: Chill mode status 
> Node Manager: Manual chill mode
> Please see attached screenshot !Screen Shot 2018-10-10 at 4.19.59 PM.png!






[jira] [Updated] (HDDS-623) On SCM UI, Node Manager info is empty

2019-03-04 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-623:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~elek] thanks for the patch. Thanks all for review.

> On SCM UI, Node Manager info is empty
> -
>
> Key: HDDS-623
> URL: https://issues.apache.org/jira/browse/HDDS-623
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2018-10-10 at 4.19.59 PM.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The following fields are empty:
> Node Manager: Minimum chill mode nodes 
> Node Manager: Out-of-node chill mode 
> Node Manager: Chill mode status 
> Node Manager: Manual chill mode
> Please see attached screenshot !Screen Shot 2018-10-10 at 4.19.59 PM.png!






[jira] [Work logged] (HDDS-623) On SCM UI, Node Manager info is empty

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-623?focusedWorklogId=207480=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207480
 ]

ASF GitHub Bot logged work on HDDS-623:
---

Author: ASF GitHub Bot
Created on: 04/Mar/19 22:34
Start Date: 04/Mar/19 22:34
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on issue #523: HDDS-623. On SCM UI, 
Node Manager info is empty
URL: https://github.com/apache/hadoop/pull/523#issuecomment-469449649
 
 
   +1
 



Issue Time Tracking
---

Worklog Id: (was: 207480)
Time Spent: 1.5h  (was: 1h 20m)

> On SCM UI, Node Manager info is empty
> -
>
> Key: HDDS-623
> URL: https://issues.apache.org/jira/browse/HDDS-623
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2018-10-10 at 4.19.59 PM.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The following fields are empty:
> Node Manager: Minimum chill mode nodes 
> Node Manager: Out-of-node chill mode 
> Node Manager: Chill mode status 
> Node Manager: Manual chill mode
> Please see attached screenshot !Screen Shot 2018-10-10 at 4.19.59 PM.png!






[jira] [Created] (HDDS-1217) Refactor ChillMode manager

2019-03-04 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1217:


 Summary: Refactor ChillMode manager
 Key: HDDS-1217
 URL: https://issues.apache.org/jira/browse/HDDS-1217
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


# Make chillmodeExitRule an abstract class and move the logic common to all 
rules into it.
 # Update the chill mode tests accordingly.






[jira] [Commented] (HDFS-14111) hdfsOpenFile on HDFS causes unnecessary IO from file offset 0

2019-03-04 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783841#comment-16783841
 ] 

Sahil Takiar commented on HDFS-14111:
-

I think this patch should be good to merge.

I talked to a few other folks about {{errno}} handling. Given that the C docs 
say {{The value in errno is significant only when the return value of the call 
indicated an error (i.e., -1 from most system calls; -1 or NULL from most 
library functions); a function that succeeds is allowed to change errno.}}, I 
think the test changes made as part of this patch should be fine.

> hdfsOpenFile on HDFS causes unnecessary IO from file offset 0
> -
>
> Key: HDFS-14111
> URL: https://issues.apache.org/jira/browse/HDFS-14111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, libhdfs
>Affects Versions: 3.2.0
>Reporter: Todd Lipcon
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-14111.001.patch, HDFS-14111.002.patch, 
> HDFS-14111.003.patch
>
>
> hdfsOpenFile() calls readDirect() with a 0-length argument in order to check 
> whether the underlying stream supports bytebuffer reads. With DFSInputStream, 
> the read(0) isn't short circuited, and results in the DFSClient opening a 
> block reader. In the case of a remote block, the block reader will actually 
> issue a read of the whole block, causing the datanode to perform unnecessary 
> IO and network transfers in order to fill up the client's TCP buffers. This 
> causes performance degradation.






[jira] [Work logged] (HDDS-1193) Refactor ContainerChillModeRule and DatanodeChillMode rule

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1193?focusedWorklogId=207470=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207470
 ]

ASF GitHub Bot logged work on HDDS-1193:


Author: ASF GitHub Bot
Created on: 04/Mar/19 22:25
Start Date: 04/Mar/19 22:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #534: HDDS-1193. 
Refactor ContainerChillModeRule and DatanodeChillMode rule.
URL: https://github.com/apache/hadoop/pull/534#issuecomment-469446813
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 964 | trunk passed |
   | +1 | compile | 46 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 32 | trunk passed |
   | +1 | shadedclient | 703 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 40 | trunk passed |
   | +1 | javadoc | 22 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 24 | the patch passed |
   | +1 | javac | 24 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | mvnsite | 25 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 713 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 47 | the patch passed |
   | +1 | javadoc | 21 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 129 | server-scm in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 2974 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.chillmode.TestSCMChillModeManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/534 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 29f0611fd368 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cb0fa0c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/1/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/1/testReport/ |
   | Max. process+thread count | 531 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 207470)
Time Spent: 1h 10m  (was: 1h)

> Refactor ContainerChillModeRule and DatanodeChillMode rule
> --
>
> Key: HDDS-1193
> URL: https://issues.apache.org/jira/browse/HDDS-1193
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The main intention of this Jira is to have all rules handle events in a 
> similar way.
> This Jira makes the following changes:
>  # Both DatanodeRule and ContainerRule implement EventHandler and listen for 
> NodeRegistrationContainerReport
>  # Update ScmChillModeManager so that it no longer handles any events (each 
> rule needs to handle its own event and act on it)





[jira] [Commented] (HDDS-807) Period should be an invalid character in bucket names

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783837#comment-16783837
 ] 

Hadoop QA commented on HDDS-807:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
29s{color} | {color:red} ozonefs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} ozonefs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-807 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961052/HDDS-807.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 32ceddf75e7b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cb0fa0c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2439/artifact/out/patch-mvninstall-hadoop-ozone_ozonefs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2439/testReport/ |
| Max. process+thread count | 2903 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozonefs U: hadoop-ozone/ozonefs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2439/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Commented] (HDDS-807) Period should be an invalid character in bucket names

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783833#comment-16783833
 ] 

Hadoop QA commented on HDDS-807:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
37s{color} | {color:red} ozonefs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} ozonefs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-807 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961051/HDDS-807.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux feaca3b6e862 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cb0fa0c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2437/artifact/out/patch-mvninstall-hadoop-ozone_ozonefs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2437/testReport/ |
| Max. process+thread count | 2681 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozonefs U: hadoop-ozone/ozonefs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2437/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

[jira] [Commented] (HDDS-699) Detect Ozone Network topology

2019-03-04 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783831#comment-16783831
 ] 

Tsz Wo Nicholas Sze commented on HDDS-699:
--

[~Sammi], I have checked the 03 patch.  It looks good in general!  Please fix 
the findbugs and other warnings.  I will check the new patch in more detail.  
Thanks a lot.

> Detect Ozone Network topology
> -
>
> Key: HDDS-699
> URL: https://issues.apache.org/jira/browse/HDDS-699
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, 
> HDDS-699.03.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable 
> java class. One thing we want to add here is the flexible multi-level support 
> instead of fixed levels like DC/Rack/NG/Node.
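The flexible multi-level topology described above (Hadoop traditionally resolves rack locations via a script or a custom `DNSToSwitchMapping` Java class) could be sketched roughly as follows. This is an illustrative standalone sketch only — `FlexibleTopology`, `addHost`, and `resolve` are hypothetical names for this example, not the HDDS-699 patch:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch: map hosts to network paths with an arbitrary
 * number of levels, instead of a fixed DC/Rack/NodeGroup/Node hierarchy.
 */
class FlexibleTopology {
  private final Map<String, String> hostToPath = new HashMap<>();
  private final String defaultPath;

  FlexibleTopology(String defaultPath) {
    this.defaultPath = defaultPath;
  }

  /** Register a host with any number of topology levels, e.g. dc/rack/ng. */
  void addHost(String host, String... levels) {
    hostToPath.put(host, "/" + String.join("/", levels));
  }

  /** Resolve a host to its network path; unknown hosts get the default. */
  String resolve(String host) {
    return hostToPath.getOrDefault(host, defaultPath);
  }

  /** Number of levels in a resolved "/"-separated path. */
  static int depth(String path) {
    return path.equals("/") ? 0 : path.split("/").length - 1;
  }
}
```

The point of the sketch is that the number of levels is a property of each registered path, not hard-coded into the resolver.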



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1216) Change name of ozoneManager service in docker compose files to om

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1216?focusedWorklogId=207454=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207454
 ]

ASF GitHub Bot logged work on HDDS-1216:


Author: ASF GitHub Bot
Created on: 04/Mar/19 21:56
Start Date: 04/Mar/19 21:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #553: HDDS-1216. Change 
name of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#issuecomment-469437517
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 20 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1091 | trunk passed |
   | -1 | compile | 24 | dist in trunk failed. |
   | -1 | mvnsite | 26 | dist in trunk failed. |
   | +1 | shadedclient | 706 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 20 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 20 | dist in the patch failed. |
   | -1 | compile | 20 | dist in the patch failed. |
   | -1 | javac | 20 | dist in the patch failed. |
   | -1 | mvnsite | 21 | dist in the patch failed. |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | shelldocs | 15 | There were no new shelldocs issues. |
   | -1 | whitespace | 0 | The patch has 3 line(s) with tabs. |
   | +1 | shadedclient | 838 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 22 | dist in the patch failed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 3001 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/553 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  yamllint  shellcheck  shelldocs  |
   | uname | Linux 1ea77581ba55 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cb0fa0c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/branch-compile-hadoop-ozone_dist.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/branch-mvnsite-hadoop-ozone_dist.txt
 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/patch-compile-hadoop-ozone_dist.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/patch-compile-hadoop-ozone_dist.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/patch-mvnsite-hadoop-ozone_dist.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/whitespace-tabs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/patch-unit-hadoop-ozone_dist.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/testReport/ |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 207454)
Time Spent: 50m  (was: 40m)

> Change name of ozoneManager service in docker compose files to om
> 

[jira] [Work logged] (HDDS-1193) Refactor ContainerChillModeRule and DatanodeChillMode rule

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1193?focusedWorklogId=207457=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207457
 ]

ASF GitHub Bot logged work on HDDS-1193:


Author: ASF GitHub Bot
Created on: 04/Mar/19 21:59
Start Date: 04/Mar/19 21:59
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #534: HDDS-1193. 
Refactor ContainerChillModeRule and DatanodeChillMode rule.
URL: https://github.com/apache/hadoop/pull/534#issuecomment-469438577
 
 
   Thank You @ajayydv  for offline discussion.
   Updated the code.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 207457)
Time Spent: 1h  (was: 50m)

> Refactor ContainerChillModeRule and DatanodeChillMode rule
> --
>
> Key: HDDS-1193
> URL: https://issues.apache.org/jira/browse/HDDS-1193
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The main intention of this Jira is to have all rules look in a similar way of 
> handling events.
> In this Jira, did the following changes:
>  # Both DatanodeRule and ContainerRule implements EventHandler and listen for 
> NodeRegistrationContainerReport
>  # Update ScmChillModeManager not to handle any events. (As each rule need to 
> handle an event, and work on that rule)
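The per-rule event handling described in the two points above could be sketched roughly like this. All class shapes here are simplified illustrations (a minimal generic `EventHandler` and counter-based rules), not the actual HDDS-1193 code:

```java
/** Minimal event-handler abstraction for the sketch. */
interface EventHandler<T> {
  void onMessage(T event);
}

/** Simplified stand-in for the registration report event. */
class NodeRegistrationContainerReport {
  final int reportedContainers;

  NodeRegistrationContainerReport(int reportedContainers) {
    this.reportedContainers = reportedContainers;
  }
}

/** Each rule handles the event itself instead of the chill-mode manager. */
class ContainerRule implements EventHandler<NodeRegistrationContainerReport> {
  private final int threshold;
  private int seen;

  ContainerRule(int threshold) {
    this.threshold = threshold;
  }

  public void onMessage(NodeRegistrationContainerReport e) {
    seen += e.reportedContainers; // accumulate reported containers
  }

  boolean validate() {
    return seen >= threshold;
  }
}

class DatanodeRule implements EventHandler<NodeRegistrationContainerReport> {
  private final int requiredNodes;
  private int registered;

  DatanodeRule(int requiredNodes) {
    this.requiredNodes = requiredNodes;
  }

  public void onMessage(NodeRegistrationContainerReport e) {
    registered++; // one registration report per datanode
  }

  boolean validate() {
    return registered >= requiredNodes;
  }
}
```

With both rules listening for the same event, the manager only has to ask each rule `validate()` rather than interpreting events itself.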



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14334) RBF: Use human readable format for long numbers in the Router UI

2019-03-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14334:
---
Attachment: HDFS-14334-HDFS-13891.001.patch

> RBF: Use human readable format for long numbers in the Router UI
> 
>
> Key: HDFS-14334
> URL: https://issues.apache.org/jira/browse/HDFS-14334
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14334-HDFS-13891.000.patch, 
> HDFS-14334-HDFS-13891.001.patch, block-files-numbers.png
>
>
> Currently, for the number of files, we show the raw number. When it starts to 
> get into millions, it is hard to read. We should use a human readable version 
> similar to what we do with PB, GB, MB,...
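A human-readable rendering of large counts, as proposed above, could look roughly like the following. This is an illustrative sketch under assumed conventions (powers of 1000, one decimal place, K/M/B/T suffixes), not the actual HDFS-14334 patch:

```java
import java.util.Locale;

/**
 * Illustrative sketch: render a large count (e.g. number of files) in a
 * short human-readable form, mirroring what UIs do for byte sizes.
 */
class HumanReadableCount {

  private static final String[] UNITS = {"K", "M", "B", "T"};

  /** e.g. 1234567 -> "1.2 M"; values under 1000 are returned unchanged. */
  static String format(long count) {
    if (count < 1000) {
      return Long.toString(count);
    }
    double value = count;
    int unit = -1;
    while (value >= 1000 && unit < UNITS.length - 1) {
      value /= 1000; // step up one unit per factor of 1000
      unit++;
    }
    // Locale.ROOT keeps the decimal separator stable across environments.
    return String.format(Locale.ROOT, "%.1f %s", value, UNITS[unit]);
  }
}
```

The same shape works client-side in the UI's JavaScript; only the suffix table and rounding policy are design choices.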



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1216) Change name of ozoneManager service in docker compose files to om

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1216?focusedWorklogId=207451=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207451
 ]

ASF GitHub Bot logged work on HDDS-1216:


Author: ASF GitHub Bot
Created on: 04/Mar/19 21:56
Start Date: 04/Mar/19 21:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #553: HDDS-1216. 
Change name of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#discussion_r262260757
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
 ##
 @@ -107,5 +140,16 @@ Run ozoneFS tests
 Execute   ls -l GET.txt
 ${rc}  ${result} =  Run And Return Rc And Outputozone fs -ls 
o3fs://abcde.pqrs/
 Should Be Equal As Integers ${rc}1
-Should contain${result} VOLUME_NOT_FOUND
+Should contain${result} not found
+
+
+Secure S3 test Failure
+Run Keyword Install aws cli
+${rc}  ${result} =  Run And Return Rc And Output  aws s3api --endpoint-url 
${ENDPOINT_URL} create-bucket --bucket bucket-test123
+Should Be True ${rc} > 0
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 207451)
Time Spent: 20m  (was: 10m)

> Change name of ozoneManager service in docker compose files to om
> -
>
> Key: HDDS-1216
> URL: https://issues.apache.org/jira/browse/HDDS-1216
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Change name of ozoneManager service in docker compose files to om for 
> consistency. (secure ozone compose file will use "om"). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1216) Change name of ozoneManager service in docker compose files to om

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1216?focusedWorklogId=207452=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207452
 ]

ASF GitHub Bot logged work on HDDS-1216:


Author: ASF GitHub Bot
Created on: 04/Mar/19 21:56
Start Date: 04/Mar/19 21:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #553: HDDS-1216. 
Change name of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#discussion_r262260766
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
 ##
 @@ -16,14 +16,44 @@
 *** Settings ***
 Documentation   Smoke test to start cluster with docker-compose 
environments.
 Library OperatingSystem
+Library String
 Resource../commonlib.robot
 
+*** Variables ***
+${ENDPOINT_URL}   http://s3g:9878
+
+*** Keywords ***
+Install aws cli s3 centos
+Executesudo yum install -y awscli
+Executesudo yum install -y krb5-user
+Install aws cli s3 debian
+Executesudo apt-get install -y awscli
+Executesudo apt-get install -y krb5-user
+
+Install aws cli
+${rc}  ${output} = Run And Return Rc And 
Output   which apt-get
+Run Keyword if '${rc}' == '0'  Install aws cli s3 debian
+${rc}  ${output} = Run And Return Rc And 
Output   yum --help
+Run Keyword if '${rc}' == '0'  Install aws cli s3 centos
+
+Setup credentials
+${hostname}=Executehostname
+Execute kinit -k testuser/${hostname}@EXAMPLE.COM -t 
/etc/security/keytabs/testuser.keytab
+${result} = Executeozone sh s3 getsecret
+${accessKey} =  Get Regexp Matches ${result} 
(?<=awsAccessKey=).*
+${secret} = Get Regexp Matches${result} 
(?<=awsSecret=).*
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 207452)
Time Spent: 0.5h  (was: 20m)

> Change name of ozoneManager service in docker compose files to om
> -
>
> Key: HDDS-1216
> URL: https://issues.apache.org/jira/browse/HDDS-1216
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Change name of ozoneManager service in docker compose files to om for 
> consistency. (secure ozone compose file will use "om"). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14331) RBF: IOE While Removing Mount Entry

2019-03-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783819#comment-16783819
 ] 

Íñigo Goiri commented on HDFS-14331:


Thanks [~ayushtkn] for the patch and [~surendrasingh] for reporting.
Committed to HDFS-13891.

> RBF: IOE While Removing Mount Entry
> ---
>
> Key: HDFS-14331
> URL: https://issues.apache.org/jira/browse/HDFS-14331
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14331-HDFS-13891-01.patch, 
> HDFS-14331-HDFS-13891-02.patch, HDFS-14331-HDFS-13891-03.patch
>
>
> IOException while trying to remove the mount entry when the actual 
> destination doesn't exist.
> {noformat}
> java.io.IOException: Directory does not exist: /mount at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.valueOf(INodeDirectory.java:59)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSetQuota(FSDirAttrOp.java:334)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setQuota(FSDirAttrOp.java:244)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setQuota(FSNamesystem.java:3352)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setQuota(NameNodeRpcServer.java:1484)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setQuota(ClientNamenodeProtocolServerSideTranslatorPB.java:1042)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:37182)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at 
> org.apache.hadoop.ipc.Server$Call.run(Server.java:1) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2825)
> {noformat}
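The failure mode in the stack trace above is that the quota-clearing step of mount-entry removal hits `setQuota` on a destination directory that no longer exists, and the resulting `IOException` aborts the removal. One conventional fix is to treat a missing destination as "nothing to clear". The sketch below illustrates that guard with hypothetical interfaces (`QuotaClient`, `clearQuotaIfPresent`); it is not the actual HDFS-14331 patch:

```java
/** Hypothetical, minimal stand-in for the quota operations involved. */
interface QuotaClient {
  boolean exists(String path);

  /** May throw if the path is missing, like setQuota in the trace above. */
  void clearQuota(String path);
}

class MountQuotaCleaner {
  /**
   * Returns true if a quota was cleared, false if the destination was
   * already gone; a missing destination never aborts the mount removal.
   */
  static boolean clearQuotaIfPresent(QuotaClient client, String dest) {
    if (!client.exists(dest)) {
      return false; // destination deleted out-of-band: nothing to clear
    }
    client.clearQuota(dest);
    return true;
  }
}
```

An alternative design is to call `clearQuota` unconditionally and swallow only the not-found error; the existence check above avoids depending on exception types but admits a small race between the check and the call.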



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14334) RBF: Use human readable format for long numbers in the Router UI

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783813#comment-16783813
 ] 

Hadoop QA commented on HDFS-14334:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
40s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14334 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961049/HDFS-14334-HDFS-13891.000.patch
 |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux 112451834d83 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / fc8dd2e |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 448 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26399/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Use human readable format for long numbers in the Router UI
> 
>
> Key: HDFS-14334
> URL: https://issues.apache.org/jira/browse/HDFS-14334
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14334-HDFS-13891.000.patch, block-files-numbers.png
>
>
> Currently, for the number of files, we show the raw number. When it starts to 
> get into millions, it is hard to read. We should use a human readable version 
> similar to what we do with PB, GB, MB,...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14331) RBF: IOE While Removing Mount Entry

2019-03-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14331:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-13891
   Status: Resolved  (was: Patch Available)

> RBF: IOE While Removing Mount Entry
> ---
>
> Key: HDFS-14331
> URL: https://issues.apache.org/jira/browse/HDFS-14331
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14331-HDFS-13891-01.patch, 
> HDFS-14331-HDFS-13891-02.patch, HDFS-14331-HDFS-13891-03.patch
>
>
> IOException while trying to remove the mount entry when the actual 
> destination doesn't exist.
> {noformat}
> java.io.IOException: Directory does not exist: /mount at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.valueOf(INodeDirectory.java:59)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSetQuota(FSDirAttrOp.java:334)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setQuota(FSDirAttrOp.java:244)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setQuota(FSNamesystem.java:3352)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setQuota(NameNodeRpcServer.java:1484)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setQuota(ClientNamenodeProtocolServerSideTranslatorPB.java:1042)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:37182)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at 
> org.apache.hadoop.ipc.Server$Call.run(Server.java:1) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2825)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1216) Change name of ozoneManager service in docker compose files to om

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1216?focusedWorklogId=207453=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207453
 ]

ASF GitHub Bot logged work on HDDS-1216:


Author: ASF GitHub Bot
Created on: 04/Mar/19 21:56
Start Date: 04/Mar/19 21:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #553: HDDS-1216. 
Change name of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#discussion_r262260773
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -78,7 +78,7 @@ execute_tests(){
  TITLE="Ozone $TEST tests with $COMPOSE_DIR cluster"
  set +e
  OUTPUT_NAME="$COMPOSE_DIR-${TEST//\//_}"
- docker-compose -f "$COMPOSE_FILE" exec -T ozoneManager python -m 
robot --log NONE --report NONE "${OZONE_ROBOT_OPTS[@]}" --output 
"smoketest/$RESULT_DIR/robot-$OUTPUT_NAME.xml" --logtitle "$TITLE" 
--reporttitle "$TITLE" "smoketest/$TEST"
+ docker-compose -f "$COMPOSE_FILE" exec -T om python -m robot --log 
NONE --report NONE "${OZONE_ROBOT_OPTS[@]}" --output 
"smoketest/$RESULT_DIR/robot-$OUTPUT_NAME.xml" --logtitle "$TITLE" 
--reporttitle "$TITLE" "smoketest/$TEST"
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 207453)
Time Spent: 40m  (was: 0.5h)

> Change name of ozoneManager service in docker compose files to om
> -
>
> Key: HDDS-1216
> URL: https://issues.apache.org/jira/browse/HDDS-1216
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Change name of ozoneManager service in docker compose files to om for 
> consistency. (secure ozone compose file will use "om"). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-03-04 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783817#comment-16783817
 ] 

Chao Sun commented on HDFS-14205:
-

Re-attach patch v6 to trigger Jenkins.

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-14205
> URL: https://issues.apache.org/jira/browse/HDFS-14205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14205-branch-2.001.patch, 
> HDFS-14205-branch-2.002.patch, HDFS-14205-branch-2.003.patch, 
> HDFS-14205-branch-2.004.patch, HDFS-14205-branch-2.005.patch, 
> HDFS-14205-branch-2.006.patch, HDFS-14205-branch-2.007.patch
>
>
> Currently support for more than 2 NameNodes (HDFS-6440) is only in branch-3. 
> This JIRA aims to backport it to branch-2, as this is required by HDFS-12943 
> (consistent read from standby) backport to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-03-04 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14205:

Attachment: HDFS-14205-branch-2.007.patch

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-14205
> URL: https://issues.apache.org/jira/browse/HDFS-14205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14205-branch-2.001.patch, 
> HDFS-14205-branch-2.002.patch, HDFS-14205-branch-2.003.patch, 
> HDFS-14205-branch-2.004.patch, HDFS-14205-branch-2.005.patch, 
> HDFS-14205-branch-2.006.patch, HDFS-14205-branch-2.007.patch
>
>
> Currently support for more than 2 NameNodes (HDFS-6440) is only in branch-3. 
> This JIRA aims to backport it to branch-2, as this is required by HDFS-12943 
> (consistent read from standby) backport to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14335) RBF: Fix heartbeat typos in the Router.

2019-03-04 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783806#comment-16783806
 ] 

CR Hota commented on HDFS-14335:


[~elgoiri] Thanks for the quick review. Didn't realize we have different flavors 
of this. I bumped into this and was stuck for a few minutes since searching for 
"heartbeat" was giving weird results.

> RBF: Fix heartbeat typos in the Router.
> ---
>
> Key: HDFS-14335
> URL: https://issues.apache.org/jira/browse/HDFS-14335
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Trivial
> Attachments: HDFS-14335-HDFS-13891.001.patch
>
>
> Found this while debugging a NameNode heartbeat-related issue. Weirdly, searching 
> for "heartbeat" wasn't giving the desired results until I realized this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14331) RBF: IOE While Removing Mount Entry

2019-03-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783812#comment-16783812
 ] 

Íñigo Goiri commented on HDFS-14331:


+1 on [^HDFS-14331-HDFS-13891-03.patch].

> RBF: IOE While Removing Mount Entry
> ---
>
> Key: HDFS-14331
> URL: https://issues.apache.org/jira/browse/HDFS-14331
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14331-HDFS-13891-01.patch, 
> HDFS-14331-HDFS-13891-02.patch, HDFS-14331-HDFS-13891-03.patch
>
>
> IOException while trying to remove the mount entry when the actual 
> destination doesn't exist.
> {noformat}
> java.io.IOException: Directory does not exist: /mount at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.valueOf(INodeDirectory.java:59)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSetQuota(FSDirAttrOp.java:334)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setQuota(FSDirAttrOp.java:244)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setQuota(FSNamesystem.java:3352)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setQuota(NameNodeRpcServer.java:1484)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setQuota(ClientNamenodeProtocolServerSideTranslatorPB.java:1042)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:37182)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at 
> org.apache.hadoop.ipc.Server$Call.run(Server.java:1) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2825)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14326) Add CorruptFilesCount to JMX

2019-03-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783809#comment-16783809
 ] 

Íñigo Goiri commented on HDFS-14326:


Let's fix the checkstyle in the class first in HDFS-14336 and then fix the 
one here.

> Add CorruptFilesCount to JMX
> 
>
> Key: HDFS-14326
> URL: https://issues.apache.org/jira/browse/HDFS-14326
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs, metrics, namenode
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Minor
> Attachments: HDFS-14326.000.patch, HDFS-14326.001.patch, 
> HDFS-14326.002.patch
>
>
> Add CorruptFilesCount to JMX






[jira] [Work logged] (HDDS-1193) Refactor ContainerChillModeRule and DatanodeChillMode rule

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1193?focusedWorklogId=207441&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207441
 ]

ASF GitHub Bot logged work on HDDS-1193:


Author: ASF GitHub Bot
Created on: 04/Mar/19 21:35
Start Date: 04/Mar/19 21:35
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #534: 
HDDS-1193. Refactor ContainerChillModeRule and DatanodeChillMode rule.
URL: https://github.com/apache/hadoop/pull/534#discussion_r262253079
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ContainerChillModeRule.java
 ##
 @@ -96,12 +92,36 @@ public void process(NodeRegistrationContainerReport 
reportsProto) {
 }
   }
 });
+  }
+
+  @Override
+  public void onMessage(NodeRegistrationContainerReport
+  nodeRegistrationContainerReport, EventPublisher publisher) {
+
+// TODO: when we have remove handlers, we can remove getInChillmode check
+if (chillModeManager.getInChillMode()) {
+  if (validate()) {
 
 Review comment:
  Updated the code with a slight change.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 207441)
Time Spent: 40m  (was: 0.5h)

> Refactor ContainerChillModeRule and DatanodeChillMode rule
> --
>
> Key: HDDS-1193
> URL: https://issues.apache.org/jira/browse/HDDS-1193
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The main intention of this Jira is to have all rules handle events in a 
> similar way.
> This Jira makes the following changes:
>  # Both DatanodeRule and ContainerRule implement EventHandler and listen for 
> NodeRegistrationContainerReport
>  # Update ScmChillModeManager not to handle any events. (As each rule needs to 
> handle an event and act on it)






[jira] [Updated] (HDFS-14270) [SBN Read] StateId and TrasactionId not present in Trace level logging

2019-03-04 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14270:
--
Attachment: HDFS-14270.003.patch

> [SBN Read] StateId and TrasactionId not present in Trace level logging
> --
>
> Key: HDFS-14270
> URL: https://issues.apache.org/jira/browse/HDFS-14270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Shweta
>Assignee: Shweta
>Priority: Trivial
> Attachments: HDFS-14270.001.patch, HDFS-14270.002.patch, 
> HDFS-14270.003.patch
>
>
> While running the command "hdfs --loglevel TRACE dfs -ls /" it was seen that 
> stateId and TransactionId do not appear in the logs. 
> How does one see the 
> stateId and TransactionId in the logs? Is there a different approach?
> CC: [~jojochuang], [~csun], [~shv]






[jira] [Updated] (HDFS-14335) RBF: Fix heartbeat typos in the Router.

2019-03-04 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14335:
---
Summary: RBF: Fix heartbeat typos in the Router.  (was: RBF: Minor refactor 
of Router class)

> RBF: Fix heartbeat typos in the Router.
> ---
>
> Key: HDFS-14335
> URL: https://issues.apache.org/jira/browse/HDFS-14335
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Trivial
> Attachments: HDFS-14335-HDFS-13891.001.patch
>
>
> Found this while debugging a namenode heartbeat-related issue. Oddly, searching 
> for "heartbeat" wasn't giving the desired results until I realized this issue.






[jira] [Commented] (HDFS-14314) fullBlockReportLeaseId should be reset after registering to NN

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783800#comment-16783800
 ] 

Hadoop QA commented on HDFS-14314:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 52 unchanged - 1 fixed = 54 total (was 53) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:da67579 |
| JIRA Issue | HDFS-14314 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961039/HDFS-14314.branch-2.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d92d0c31742a 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |

[jira] [Commented] (HDFS-14331) RBF: IOE While Removing Mount Entry

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783799#comment-16783799
 ] 

Hadoop QA commented on HDFS-14331:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m  
8s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14331 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961045/HDFS-14331-HDFS-13891-03.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux be81a1645a53 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / fc8dd2e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26398/testReport/ |
| Max. process+thread count | 1440 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26398/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: IOE While Removing Mount Entry
> ---
>
> Key: HDFS-14331
> 

[jira] [Work logged] (HDDS-1193) Refactor ContainerChillModeRule and DatanodeChillMode rule

2019-03-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1193?focusedWorklogId=207442&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-207442
 ]

ASF GitHub Bot logged work on HDDS-1193:


Author: ASF GitHub Bot
Created on: 04/Mar/19 21:35
Start Date: 04/Mar/19 21:35
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #534: 
HDDS-1193. Refactor ContainerChillModeRule and DatanodeChillMode rule.
URL: https://github.com/apache/hadoop/pull/534#discussion_r262253124
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/DataNodeChillModeRule.java
 ##
 @@ -62,18 +65,37 @@ public double getRegisteredDataNodes() {
 
   @Override
   public void process(NodeRegistrationContainerReport reportsProto) {
-if (requiredDns == 0) {
-  // No dn check required.
+
+registeredDnSet.add(reportsProto.getDatanodeDetails().getUuid());
+registeredDns = registeredDnSet.size();
+
+  }
+
+  @Override
+  public void onMessage(NodeRegistrationContainerReport
+  nodeRegistrationContainerReport, EventPublisher publisher) {
+// TODO: when we have remove handlers, we can remove getInChillmode check
+if (chillModeManager.getInChillMode()) {
+  if (validate()) {
 
 Review comment:
   Done
 



Issue Time Tracking
---

Worklog Id: (was: 207442)
Time Spent: 50m  (was: 40m)

> Refactor ContainerChillModeRule and DatanodeChillMode rule
> --
>
> Key: HDDS-1193
> URL: https://issues.apache.org/jira/browse/HDDS-1193
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The main intention of this Jira is to have all rules handle events in a 
> similar way.
> This Jira makes the following changes:
>  # Both DatanodeRule and ContainerRule implement EventHandler and listen for 
> NodeRegistrationContainerReport
>  # Update ScmChillModeManager not to handle any events. (As each rule needs to 
> handle an event and act on it)






[jira] [Commented] (HDFS-14270) [SBN Read] StateId and TrasactionId not present in Trace level logging

2019-03-04 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783796#comment-16783796
 ] 

Shweta commented on HDFS-14270:
---

Uploaded 003 for the JIRA bot to run.

> [SBN Read] StateId and TrasactionId not present in Trace level logging
> --
>
> Key: HDFS-14270
> URL: https://issues.apache.org/jira/browse/HDFS-14270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Shweta
>Assignee: Shweta
>Priority: Trivial
> Attachments: HDFS-14270.001.patch, HDFS-14270.002.patch, 
> HDFS-14270.003.patch
>
>
> While running the command "hdfs --loglevel TRACE dfs -ls /" it was seen that 
> stateId and TransactionId do not appear in the logs. 
> How does one see the 
> stateId and TransactionId in the logs? Is there a different approach?
> CC: [~jojochuang], [~csun], [~shv]






[jira] [Commented] (HDFS-14335) RBF: Minor refactor of Router class

2019-03-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783795#comment-16783795
 ] 

Íñigo Goiri commented on HDFS-14335:


I would rename the JIRA to something like: Fix heartbeat typos in the Router.
BTW, we have different variations of the typo :) 

> RBF: Minor refactor of Router class
> ---
>
> Key: HDFS-14335
> URL: https://issues.apache.org/jira/browse/HDFS-14335
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Trivial
> Attachments: HDFS-14335-HDFS-13891.001.patch
>
>
> Found this while debugging a namenode heartbeat-related issue. Oddly, searching 
> for "heartbeat" wasn't giving the desired results until I realized this issue.






[jira] [Created] (HDFS-14335) RBF: Minor refactor of Router class

2019-03-04 Thread CR Hota (JIRA)
CR Hota created HDFS-14335:
--

 Summary: RBF: Minor refactor of Router class
 Key: HDFS-14335
 URL: https://issues.apache.org/jira/browse/HDFS-14335
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: CR Hota
Assignee: CR Hota


Found this while debugging a namenode heartbeat-related issue. Oddly, searching 
for "heartbeat" wasn't giving the desired results until I realized this issue.






[jira] [Commented] (HDFS-14333) Datanode fails to start if any disk has errors during Namenode registration

2019-03-04 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783792#comment-16783792
 ] 

Daniel Templeton commented on HDFS-14333:
-

I took a look, and I don't have any comments.  LGTM!  I'm gonna let someone who 
knows volume management in the data node better give you the +1, though.

> Datanode fails to start if any disk has errors during Namenode registration
> ---
>
> Key: HDFS-14333
> URL: https://issues.apache.org/jira/browse/HDFS-14333
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14333.001.patch
>
>
> This is closely related to HDFS-9908, where it was reported that a datanode 
> would fail to start if an IO error occurred on a single disk when running du 
> during Datanode registration. That Jira was closed due to HADOOP-12973 which 
> refactored how du is called and prevents any exception being thrown. However 
> this problem can still occur if the volume has errors (eg permission or 
> filesystem corruption) when the disk is scanned to load all the replicas. The 
> method chain is:
> DataNode.initBlockPool -> FSDataSetImpl.addBlockPool -> 
> FSVolumeList.getAllVolumesMap -> Throws exception which goes unhandled.
> The DN logs will contain a stack trace for the problem volume, so the 
> workaround is to remove the volume from the DN config and the DN will start, 
> but the logs are a little confusing, so it's not always obvious what the 
> issue is.
> These are the cut down logs from an occurrence of this issue.
> {code}
> 2019-03-01 08:58:24,830 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-240961797-x.x.x.x-1392827522027 on volume 
> /data/18/dfs/dn/current...
> ...
> 2019-03-01 08:58:27,029 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Could 
> not get disk usage information
> ExitCodeException exitCode=1: du: cannot read directory 
> `/data/18/dfs/dn/current/BP-240961797-x.x.x.x-1392827522027/current/finalized/subdir149/subdir215':
>  Permission denied
> du: cannot read directory 
> `/data/18/dfs/dn/current/BP-240961797-x.x.x.x-1392827522027/current/finalized/subdir149/subdir213':
>  Permission denied
> du: cannot read directory 
> `/data/18/dfs/dn/current/BP-240961797-x.x.x.x-1392827522027/current/finalized/subdir97/subdir25':
>  Permission denied
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:601)
>   at org.apache.hadoop.util.Shell.run(Shell.java:504)
>   at org.apache.hadoop.fs.DU$DUShell.startRefresh(DU.java:61)
>   at org.apache.hadoop.fs.DU.refresh(DU.java:53)
>   at 
> org.apache.hadoop.fs.CachingGetSpaceUsed.init(CachingGetSpaceUsed.java:84)
>   at 
> org.apache.hadoop.fs.GetSpaceUsed$Builder.build(GetSpaceUsed.java:166)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.<init>(BlockPoolSlice.java:145)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:881)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$2.run(FsVolumeList.java:412)
> ...
> 2019-03-01 08:58:27,043 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time 
> taken to scan block pool BP-240961797-x.x.x.x-1392827522027 on 
> /data/18/dfs/dn/current: 2202ms
> {code}
> So we can see a du error occurred, was logged but not re-thrown (due to 
> HADOOP-12973) and the blockpool scan completed. However then in the 'add 
> replicas to map' logic, we got another exception stemming from the same 
> problem:
> {code}
> 2019-03-01 08:58:27,564 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding 
> replicas to map for block pool BP-240961797-x.x.x.x-1392827522027 on volume 
> /data/18/dfs/dn/current...
> ...
> 2019-03-01 08:58:31,155 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Caught 
> exception while adding replicas from /data/18/dfs/dn/current. Will throw 
> later.
> java.io.IOException: Invalid directory or I/O error occurred for dir: 
> /data/18/dfs/dn/current/BP-240961797-x.x.x.x-1392827522027/current/finalized/subdir149/subdir215
>   at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1167)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:445)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> 
