[jira] [Commented] (HDFS-14169) RBF: Correct the returned value in case of IOException in NamenodeBeanMetrics#getFederationMetrics

2018-12-23 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728203#comment-16728203
 ] 

Surendra Singh Lilhore commented on HDFS-14169:
---

[~ayushtkn], the default value for metrics should be 0, not -1.

> RBF: Correct the returned value in case of IOException in 
> NamenodeBeanMetrics#getFederationMetrics
> --
>
> Key: HDFS-14169
> URL: https://issues.apache.org/jira/browse/HDFS-14169
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14169-HDFS-13891-01.patch
>
>
> Presently, in case of an IOException, the metrics value returned is 0, which is 
> a legal entry. Better to change it to a value that indicates the value hasn't 
> actually been fetched.
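For illustration, a minimal sketch of the pattern under discussion (the getter name and accessor below are simplified assumptions, not the actual NamenodeBeanMetrics code): the getter swallows the IOException and falls back to a default, so callers cannot tell a real 0 from a failed fetch.

{code:java}
// Hypothetical, simplified getter in the style of NamenodeBeanMetrics;
// the real method names and the FederationMetrics accessor differ in detail.
public long getFilesTotal() {
  try {
    // Fetch the real value from the federation metrics.
    return getFederationMetrics().getNumFiles();
  } catch (IOException ioe) {
    // Returning 0 here is indistinguishable from a legitimate "zero files";
    // the JIRA proposes returning a sentinel (e.g. -1) so callers can tell the
    // value was not actually fetched, while this thread argues for keeping 0.
    return 0;
  }
}
{code}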



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14167) RBF: Add stale nodes to federation metrics

2018-12-23 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728202#comment-16728202
 ] 

Surendra Singh Lilhore commented on HDFS-14167:
---

{quote}For the -1 issue we may want to start a new JIRA.
{quote}
No need; a metric is a count of system status, so it should not return -1. If 
nothing is there, just return 0.

Many systems use metrics to build graphs and display system status. Please 
check HDFS-8932: in the exception case it returns the default of 0.

> RBF: Add stale nodes to federation metrics
> --
>
> Key: HDFS-14167
> URL: https://issues.apache.org/jira/browse/HDFS-14167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14167-HDFS-13891.000.patch
>
>
> The federation metrics mimic the Namenode FSNamesystemState. However, the 
> stale datanodes are not collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14171) Performance improvement in Tailing EditLog

2018-12-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728167#comment-16728167
 ] 

Hadoop QA commented on HDFS-14171:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14171 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952967/HDFS-14171.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 75be475182ca 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 26e4be7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25858/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25858/testReport/ |
| Max. process+thread count | 4782 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-14046) In-Maintenance ICON is missing in datanode info page

2018-12-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728158#comment-16728158
 ] 

Hudson commented on HDFS-14046:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15661 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15661/])
HDFS-14046. In-Maintenance ICON is missing in datanode info page. 
(surendralilhore: rev 686fcd4db34dfe8642ff4b25fffbc73e42217f30)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


> In-Maintenance ICON is missing in datanode info page
> 
>
> Key: HDFS-14046
> URL: https://issues.apache.org/jira/browse/HDFS-14046
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-14046.001.patch, HDFS-14046.002.patch, 
> IMG_20181222_123726.jpg
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14046) In-Maintenance ICON is missing in datanode info page

2018-12-23 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-14046:
--
   Resolution: Fixed
Fix Version/s: 3.2.1
   3.3.0
   3.1.2
   Status: Resolved  (was: Patch Available)

Thanks [~RANith] for the contribution.

Committed to trunk, branch-3.2 and branch-3.1!

> In-Maintenance ICON is missing in datanode info page
> 
>
> Key: HDFS-14046
> URL: https://issues.apache.org/jira/browse/HDFS-14046
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-14046.001.patch, HDFS-14046.002.patch, 
> IMG_20181222_123726.jpg
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14046) In-Maintenance ICON is missing in datanode info page

2018-12-23 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728152#comment-16728152
 ] 

Surendra Singh Lilhore commented on HDFS-14046:
---

+1 for the v2 patch.

Committing this shortly!

> In-Maintenance ICON is missing in datanode info page
> 
>
> Key: HDFS-14046
> URL: https://issues.apache.org/jira/browse/HDFS-14046
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14046.001.patch, HDFS-14046.002.patch, 
> IMG_20181222_123726.jpg
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14046) In-Maintenance ICON is missing in datanode info page

2018-12-23 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-14046:
--
Summary: In-Maintenance ICON is missing in datanode info page  (was: There 
is no ICON for Maintenance in the Datanode UI page, and after a Datanode moves 
into Maintenance state the datanode mark status is still empty in the Datanode UI.)

> In-Maintenance ICON is missing in datanode info page
> 
>
> Key: HDFS-14046
> URL: https://issues.apache.org/jira/browse/HDFS-14046
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14046.001.patch, HDFS-14046.002.patch, 
> IMG_20181222_123726.jpg
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728137#comment-16728137
 ] 

Hadoop QA commented on HDFS-13856:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
17s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
23s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13856 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952963/HDFS-13856-HDFS-13891.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 14ce10d7d608 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / c9ebaf2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25857/testReport/ |
| Max. process+thread count | 1432 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25857/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: RouterAdmin should support dfsrouteradmin -refresh command
> 

[jira] [Reopened] (HDFS-14171) Performance improvement in Tailing EditLog

2018-12-23 Thread Kenneth Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Yang reopened HDFS-14171:
-

> Performance improvement in Tailing EditLog
> --
>
> Key: HDFS-14171
> URL: https://issues.apache.org/jira/browse/HDFS-14171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Kenneth Yang
>Priority: Minor
> Attachments: HDFS-14171.000.patch
>
>
> Stack:
> {code:java}
> Thread 456 (Edit log tailer):
> State: RUNNABLE
> Blocked count: 1139
> Waited count: 12
> Stack:
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getNumLiveDataNodes(DatanodeManager.java:1259)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.areThresholdsMet(BlockManagerSafeMode.java:570)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.checkSafeMode(BlockManagerSafeMode.java:213)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.adjustBlockTotals(BlockManagerSafeMode.java:265)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.completeBlock(BlockManager.java:1087)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.forceCompleteBlock(BlockManager.java:1118)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1126)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:468)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:258)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:892)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> Thread 455 (pool-16-thread-1):
> {code}
> code:
> {code:java}
> private boolean areThresholdsMet() {
>   assert namesystem.hasWriteLock();
>   int datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
>   synchronized (this) {
> return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>   }
> }
> {code}
> According to the code, each time the method areThresholdsMet() is called, the 
> value of {color:#ff}datanodeNum{color} needs to be calculated. However, when 
> {color:#ff}datanodeThreshold{color} is equal to 0 (0 is the default value of the 
> configuration), the expression datanodeNum >= datanodeThreshold always returns true.
> Calling the method {color:#ff}getNumLiveDataNodes(){color} is time consuming at 
> the scale of a 10,000-datanode cluster. Therefore, we add a guard condition so 
> that datanodeNum is calculated only when datanodeThreshold is greater than 0, 
> which improves the performance greatly.
>  
>  
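For readability, a minimal sketch of the guard described above (one possible reading of the proposal; the attached HDFS-14171.000.patch may differ in detail):

{code:java}
private boolean areThresholdsMet() {
  assert namesystem.hasWriteLock();
  // Only ask the DatanodeManager for the live-node count when a datanode
  // threshold is actually configured; with the default datanodeThreshold == 0
  // the comparison below is always true, so the expensive
  // getNumLiveDataNodes() call can be skipped.
  int datanodeNum = datanodeThreshold > 0
      ? blockManager.getDatanodeManager().getNumLiveDataNodes()
      : 0;
  synchronized (this) {
    return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
  }
}
{code}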



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14171) Performance improvement in Tailing EditLog

2018-12-23 Thread Kenneth Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Yang updated HDFS-14171:

Resolution: Works for Me
Status: Resolved  (was: Patch Available)

> Performance improvement in Tailing EditLog
> --
>
> Key: HDFS-14171
> URL: https://issues.apache.org/jira/browse/HDFS-14171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Kenneth Yang
>Priority: Minor
> Attachments: HDFS-14171.000.patch
>
>
> Stack:
> {code:java}
> Thread 456 (Edit log tailer):
> State: RUNNABLE
> Blocked count: 1139
> Waited count: 12
> Stack:
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getNumLiveDataNodes(DatanodeManager.java:1259)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.areThresholdsMet(BlockManagerSafeMode.java:570)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.checkSafeMode(BlockManagerSafeMode.java:213)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.adjustBlockTotals(BlockManagerSafeMode.java:265)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.completeBlock(BlockManager.java:1087)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.forceCompleteBlock(BlockManager.java:1118)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1126)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:468)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:258)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:892)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> Thread 455 (pool-16-thread-1):
> {code}
> code:
> {code:java}
> private boolean areThresholdsMet() {
>   assert namesystem.hasWriteLock();
>   int datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
>   synchronized (this) {
> return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>   }
> }
> {code}
> According to the code, each time the method areThresholdsMet() is called, the 
> value of {color:#ff}datanodeNum{color} needs to be calculated. However, when 
> {color:#ff}datanodeThreshold{color} is equal to 0 (0 is the default value of the 
> configuration), the expression datanodeNum >= datanodeThreshold always returns true.
> Calling the method {color:#ff}getNumLiveDataNodes(){color} is time consuming at 
> the scale of a 10,000-datanode cluster. Therefore, we add a guard condition so 
> that datanodeNum is calculated only when datanodeThreshold is greater than 0, 
> which improves the performance greatly.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14171) Performance improvement in Tailing EditLog

2018-12-23 Thread Kenneth Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Yang updated HDFS-14171:

Status: Patch Available  (was: Reopened)

> Performance improvement in Tailing EditLog
> --
>
> Key: HDFS-14171
> URL: https://issues.apache.org/jira/browse/HDFS-14171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1, 2.9.0
>Reporter: Kenneth Yang
>Priority: Minor
> Attachments: HDFS-14171.000.patch
>
>
> Stack:
> {code:java}
> Thread 456 (Edit log tailer):
> State: RUNNABLE
> Blocked count: 1139
> Waited count: 12
> Stack:
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getNumLiveDataNodes(DatanodeManager.java:1259)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.areThresholdsMet(BlockManagerSafeMode.java:570)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.checkSafeMode(BlockManagerSafeMode.java:213)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.adjustBlockTotals(BlockManagerSafeMode.java:265)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.completeBlock(BlockManager.java:1087)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.forceCompleteBlock(BlockManager.java:1118)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1126)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:468)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:258)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:892)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> Thread 455 (pool-16-thread-1):
> {code}
> code:
> {code:java}
> private boolean areThresholdsMet() {
>   assert namesystem.hasWriteLock();
>   int datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
>   synchronized (this) {
> return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>   }
> }
> {code}
> According to the code, each time the method areThresholdsMet() is called, the 
> value of {color:#ff}datanodeNum{color} needs to be calculated. However, when 
> {color:#ff}datanodeThreshold{color} is equal to 0 (0 is the default value of the 
> configuration), the expression datanodeNum >= datanodeThreshold always returns true.
> Calling the method {color:#ff}getNumLiveDataNodes(){color} is time consuming at 
> the scale of a 10,000-datanode cluster. Therefore, we add a guard condition so 
> that datanodeNum is calculated only when datanodeThreshold is greater than 0, 
> which improves the performance greatly.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14171) Performance improvement in Tailing EditLog

2018-12-23 Thread Kenneth Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Yang updated HDFS-14171:

Description: 
Stack:
{code:java}
Thread 456 (Edit log tailer):
State: RUNNABLE
Blocked count: 1139
Waited count: 12
Stack:
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getNumLiveDataNodes(DatanodeManager.java:1259)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.areThresholdsMet(BlockManagerSafeMode.java:570)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.checkSafeMode(BlockManagerSafeMode.java:213)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.adjustBlockTotals(BlockManagerSafeMode.java:265)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.completeBlock(BlockManager.java:1087)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.forceCompleteBlock(BlockManager.java:1118)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1126)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:468)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:258)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:892)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
Thread 455 (pool-16-thread-1):


{code}
code:
{code:java}
private boolean areThresholdsMet() {
  assert namesystem.hasWriteLock();
  int datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
  synchronized (this) {
return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
  }
}
{code}
According to the code, each time the method areThresholdsMet() is called, the 
value of {color:#ff}datanodeNum{color} needs to be calculated. However, when 
{color:#ff}datanodeThreshold{color} is equal to 0 (0 is the default value of the 
configuration), the expression datanodeNum >= datanodeThreshold always returns true.

Calling the method {color:#ff}getNumLiveDataNodes(){color} is time consuming at 
the scale of a 10,000-datanode cluster. Therefore, we add a guard condition so 
that datanodeNum is calculated only when datanodeThreshold is greater than 0, 
which improves the performance greatly.

 

 

  was:
Stack:
{code:java}
Thread 456 (Edit log tailer):
State: RUNNABLE
Blocked count: 1139
Waited count: 12
Stack:
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getNumLiveDataNodes(DatanodeManager.java:1259)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.areThresholdsMet(BlockManagerSafeMode.java:570)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.checkSafeMode(BlockManagerSafeMode.java:213)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.adjustBlockTotals(BlockManagerSafeMode.java:265)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.completeBlock(BlockManager.java:1087)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.forceCompleteBlock(BlockManager.java:1118)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1126)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:468)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:258)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:892)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
Thread 455 (pool-16-thread-1):


{code}
code:
{code:java}
private boolean areThresholdsMet() {
  assert namesystem.hasWriteLock();
  int datanodeNum = 

[jira] [Updated] (HDFS-14171) Performance improvement in Tailing EditLog

2018-12-23 Thread Kenneth Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Yang updated HDFS-14171:

Status: Patch Available  (was: Open)

> Performance improvement in Tailing EditLog
> --
>
> Key: HDFS-14171
> URL: https://issues.apache.org/jira/browse/HDFS-14171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1, 2.9.0
>Reporter: Kenneth Yang
>Priority: Minor
> Attachments: HDFS-14171.000.patch
>
>
> Stack:
> {code:java}
> Thread 456 (Edit log tailer):
> State: RUNNABLE
> Blocked count: 1139
> Waited count: 12
> Stack:
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getNumLiveDataNodes(DatanodeManager.java:1259)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.areThresholdsMet(BlockManagerSafeMode.java:570)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.checkSafeMode(BlockManagerSafeMode.java:213)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.adjustBlockTotals(BlockManagerSafeMode.java:265)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.completeBlock(BlockManager.java:1087)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.forceCompleteBlock(BlockManager.java:1118)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1126)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:468)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:258)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:892)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> Thread 455 (pool-16-thread-1):
> {code}
> code:
> {code:java}
> private boolean areThresholdsMet() {
>   assert namesystem.hasWriteLock();
>   int datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
>   synchronized (this) {
> return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>   }
> }
> {code}
> According to the code, each time the method areThresholdsMet() is called, the 
> value of {color:#ff}datanodeNum{color} needs to be calculated. However, when 
> {color:#ff}datanodeThreshold{color} is equal to 0 (0 is the default value of the 
> configuration), the expression datanodeNum >= datanodeThreshold always returns true.
> Calling the method {color:#ff}getNumLiveDataNodes(){color} is time consuming at 
> the scale of a 10,000-datanode cluster. Therefore, we add a guard condition so 
> that datanodeNum is calculated only when datanodeThreshold is greater than 0, 
> which improves the performance greatly.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14171) Performance improvement in Tailing EditLog

2018-12-23 Thread Kenneth Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Yang updated HDFS-14171:

Description: 
Stack:
{code:java}
Thread 456 (Edit log tailer):
State: RUNNABLE
Blocked count: 1139
Waited count: 12
Stack:
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getNumLiveDataNodes(DatanodeManager.java:1259)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.areThresholdsMet(BlockManagerSafeMode.java:570)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.checkSafeMode(BlockManagerSafeMode.java:213)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.adjustBlockTotals(BlockManagerSafeMode.java:265)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.completeBlock(BlockManager.java:1087)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.forceCompleteBlock(BlockManager.java:1118)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1126)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:468)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:258)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:892)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
Thread 455 (pool-16-thread-1):


{code}
code:
{code:java}
private boolean areThresholdsMet() {
  assert namesystem.hasWriteLock();
  int datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
  synchronized (this) {
return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
  }
}
{code}
According to the code, each time the method areThresholdsMet() is called, the 
value of {color:#ff}datanodeNum{color} needs to be calculated. However, when 
{color:#ff}datanodeThreshold{color} is equal to 0 (0 is the default value of the 
configuration), the expression datanodeNum >= datanodeThreshold always returns true.

Calling the method {color:#ff}getNumLiveDataNodes(){color} is time consuming at 
the scale of a 10,000-datanode cluster. Therefore, we add a guard condition so 
that datanodeNum is calculated only when datanodeThreshold is greater than 0, 
which improves the performance greatly.

 

 

  was:
 

stack:
{code:java}
Thread 456 (Edit log tailer):
State: RUNNABLE
Blocked count: 1139
Waited count: 12
Stack:
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getNumLiveDataNodes(DatanodeManager.java:1259)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.areThresholdsMet(BlockManagerSafeMode.java:570)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.checkSafeMode(BlockManagerSafeMode.java:213)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.adjustBlockTotals(BlockManagerSafeMode.java:265)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.completeBlock(BlockManager.java:1087)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.forceCompleteBlock(BlockManager.java:1118)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1126)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:468)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:258)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:892)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
Thread 455 (pool-16-thread-1):


{code}
code:
{code:java}
private boolean areThresholdsMet() {
  assert namesystem.hasWriteLock();
  int datanodeNum = 

[jira] [Updated] (HDFS-14171) Performance improvement in Tailing EditLog

2018-12-23 Thread Kenneth Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Yang updated HDFS-14171:

Description: 
 

stack:
{code:java}
Thread 456 (Edit log tailer):
State: RUNNABLE
Blocked count: 1139
Waited count: 12
Stack:
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getNumLiveDataNodes(DatanodeManager.java:1259)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.areThresholdsMet(BlockManagerSafeMode.java:570)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.checkSafeMode(BlockManagerSafeMode.java:213)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.adjustBlockTotals(BlockManagerSafeMode.java:265)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.completeBlock(BlockManager.java:1087)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.forceCompleteBlock(BlockManager.java:1118)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1126)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:468)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:258)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:892)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
Thread 455 (pool-16-thread-1):


{code}
code:
{code:java}
private boolean areThresholdsMet() {
  assert namesystem.hasWriteLock();
  int datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
  synchronized (this) {
return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
  }
}
{code}
According to the code, each time the method areThresholdsMet() is called, the 
value of {color:#ff}datanodeNum{color} needs to be calculated. However, when 
{color:#ff}datanodeThreshold{color} is equal to 0 (0 is the default value of the 
configuration), the expression datanodeNum >= datanodeThreshold always returns true.

Calling the method {color:#ff}getNumLiveDataNodes(){color} is time consuming at 
the scale of a 10,000-datanode cluster. Therefore, we add a guard condition so 
that datanodeNum is calculated only when datanodeThreshold is greater than 0, 
which improves the performance greatly.

 

 

  was:
 

stack:
{code:java}
Thread 456 (Edit log tailer):
State: RUNNABLE
Blocked count: 1139
Waited count: 12
Stack:
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getNumLiveDataNodes(DatanodeManager.java:1259)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.areThresholdsMet(BlockManagerSafeMode.java:570)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.checkSafeMode(BlockManagerSafeMode.java:213)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.adjustBlockTotals(BlockManagerSafeMode.java:265)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.completeBlock(BlockManager.java:1087)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.forceCompleteBlock(BlockManager.java:1118)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1126)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:468)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:258)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:892)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
Thread 455 (pool-16-thread-1):


{code}
code:
{code:java}
private boolean areThresholdsMet() {
  assert namesystem.hasWriteLock();
  int datanodeNum = 

[jira] [Updated] (HDFS-14171) Performance improvement in Tailing EditLog

2018-12-23 Thread Kenneth Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Yang updated HDFS-14171:

Attachment: HDFS-14171.000.patch

> Performance improvement in Tailing EditLog
> --
>
> Key: HDFS-14171
> URL: https://issues.apache.org/jira/browse/HDFS-14171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Kenneth Yang
>Priority: Minor
> Attachments: HDFS-14171.000.patch
>
>
>  
> stack:
> {code:java}
> Thread 456 (Edit log tailer):
> State: RUNNABLE
> Blocked count: 1139
> Waited count: 12
> Stack:
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getNumLiveDataNodes(DatanodeManager.java:1259)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.areThresholdsMet(BlockManagerSafeMode.java:570)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.checkSafeMode(BlockManagerSafeMode.java:213)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.adjustBlockTotals(BlockManagerSafeMode.java:265)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.completeBlock(BlockManager.java:1087)
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.forceCompleteBlock(BlockManager.java:1118)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1126)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:468)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:258)
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:892)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> Thread 455 (pool-16-thread-1):
> {code}
> code:
> {code:java}
> private boolean areThresholdsMet() {
>   assert namesystem.hasWriteLock();
>   int datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
>   synchronized (this) {
> return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>   }
> }
> {code}
> According to the code, each time the method areThresholdsMet() is called, the 
> value of {color:#FF}datanodeNum{color} needs to be calculated. However, when 
> {color:#FF}datanodeThreshold{color} is equal to 0 (0 is the default value of the 
> configuration), the expression datanodeNum >= datanodeThreshold always returns 
> true. Calling the method {color:#FF}getNumLiveDataNodes(){color} is time 
> consuming at the scale of a 10,000-datanode cluster. Therefore, we add a guard 
> condition so that datanodeNum is calculated only when datanodeThreshold is 
> greater than 0, which improves the performance greatly.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14167) RBF: Add stale nodes to federation metrics

2018-12-23 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728113#comment-16728113
 ] 

Ayush Saxena commented on HDFS-14167:
-

Raised HDFS-14169 for the -1 issue.

> RBF: Add stale nodes to federation metrics
> --
>
> Key: HDFS-14167
> URL: https://issues.apache.org/jira/browse/HDFS-14167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14167-HDFS-13891.000.patch
>
>
> The federation metrics mimic the Namenode FSNamesystemState. However, the 
> stale datanodes are not collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14161) RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection

2018-12-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728111#comment-16728111
 ] 

Hadoop QA commented on HDFS-14161:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
36s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14161 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952962/HDFS-14161-HDFS-13891.005.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ec6cfc9b7776 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / c9ebaf2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25856/testReport/ |
| Max. process+thread count | 1517 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25856/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Throw StandbyException instead of IOException so that client can retry 
> when can not get connection
> 

[jira] [Created] (HDFS-14171) Performance improvement in Tailing EditLog

2018-12-23 Thread Kenneth Yang (JIRA)
Kenneth Yang created HDFS-14171:
--

 Summary: Performance improvement in Tailing EditLog
 Key: HDFS-14171
 URL: https://issues.apache.org/jira/browse/HDFS-14171
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0-alpha1, 2.9.0
Reporter: Kenneh Yang


 

stack:
{code:java}
Thread 456 (Edit log tailer):
State: RUNNABLE
Blocked count: 1139
Waited count: 12
Stack:
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getNumLiveDataNodes(DatanodeManager.java:1259)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.areThresholdsMet(BlockManagerSafeMode.java:570)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.checkSafeMode(BlockManagerSafeMode.java:213)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode.adjustBlockTotals(BlockManagerSafeMode.java:265)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.completeBlock(BlockManager.java:1087)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.forceCompleteBlock(BlockManager.java:1118)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:1126)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:468)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:258)
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:892)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
Thread 455 (pool-16-thread-1):


{code}
code:
{code:java}
private boolean areThresholdsMet() {
  assert namesystem.hasWriteLock();
  int datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
  synchronized (this) {
return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
  }
}
{code}
According to the code, each time the method areThresholdsMet() is called, the 
value of {color:#FF}datanodeNum{color} has to be calculated. However, when 
{color:#FF}datanodeThreshold{color} is equal to 0 (0 is the default value of 
the configuration), the expression datanodeNum >= datanodeThreshold always 
returns true. Calling the method {color:#FF}getNumLiveDataNodes(){color} is 
time consuming on clusters at the scale of 10,000 datanodes. Therefore, we add 
a guard condition so that datanodeNum is only calculated when 
datanodeThreshold is greater than 0, which improves the performance greatly.
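A minimal sketch of the proposed guard, based on the snippet above (illustrative only, not the attached change):
{code:java}
private boolean areThresholdsMet() {
  assert namesystem.hasWriteLock();
  // Only query the live datanode count when a datanode threshold is actually
  // configured; with the default of 0 the comparison is always true anyway.
  int datanodeNum = 0;
  if (datanodeThreshold > 0) {
    datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
  }
  synchronized (this) {
    return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
  }
}
{code}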

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-23 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728107#comment-16728107
 ] 

Fei Hui edited comment on HDFS-13856 at 12/24/18 3:04 AM:
--

[~elgoiri] Uploaded v002 patch:
* remove the extra line in the imports of RouterAdmin.
* remove the semicolon in RouterAdmin#970
* undo the last line change in RouterAdmin
* add this to the documentation

{quote}
Kind of weird the way genericRefresh shows the error in the finally. One could 
also do the returns directly without storing it in returnCode.
{quote}
I found that it's the same as DFSAdmin.java. I guess this feature is from 
DFSAdmin.



was (Author: ferhui):
[~elgoiri] Uploaded v002 patch:
* remove the extra line in the imports of RouterAdmin.
* remove the semicolon in RouterAdmin#970
* undo the last line change in RouterAdmin
* add this to the documentation

{quote}
Kind of weird the way genericRefresh shows the error in the finally. One could 
also do the returns directly without storing it in returnCode.
{quote}
I found that it's the same as DFSAdmin.java. I guess this feature is from 
DFSAdmin.


> RBF: RouterAdmin should support dfsrouteradmin -refresh command
> ---
>
> Key: HDFS-13856
> URL: https://issues.apache.org/jira/browse/HDFS-13856
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13856-HDFS-13891.001.patch, 
> HDFS-13856-HDFS-13891.002.patch, HDFS-13856.001.patch, HDFS-13856.002.patch
>
>
> Like the namenode, the router should support refreshing policies individually. For 
> example, we have implemented simple password authentication per RPC connection. 
> The password dict can be refreshed by the generic refresh policy. We also want 
> to support this in RouterAdminServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-23 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728107#comment-16728107
 ] 

Fei Hui edited comment on HDFS-13856 at 12/24/18 3:03 AM:
--

[~elgoiri] Uploaded v002 patch:
* remove the extra line in the imports of RouterAdmin.
* remove the semicolon in RouterAdmin#970
* undo the last line change in RouterAdmin
* add this to the documentation

{quote}
Kind of weird the way genericRefresh shows the error in the finally. One could 
also do the returns directly without storing it in returnCode.
{quote}
I found that it's the same as DFSAdmin.java. I guess this feature is from 
DFSAdmin.



was (Author: ferhui):
[~elgoiri] Uploaded v002 patch:
* remove the extra line in the imports of RouterAdmin.
* remove the semicolon in RouterAdmin#970
* undo the last line change in RouterAdmin
* add this to the documentation
{quote}
Kind of weird the way genericRefresh shows the error in the finally. One could 
also do the returns directly without storing it in returnCode.
{quote}
I found that it's the same as DFSAdmin.java. I guess this feature is from 
DFSAdmin.


> RBF: RouterAdmin should support dfsrouteradmin -refresh command
> ---
>
> Key: HDFS-13856
> URL: https://issues.apache.org/jira/browse/HDFS-13856
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13856-HDFS-13891.001.patch, 
> HDFS-13856-HDFS-13891.002.patch, HDFS-13856.001.patch, HDFS-13856.002.patch
>
>
> Like the namenode, the router should support refreshing policies individually. For 
> example, we have implemented simple password authentication per RPC connection. 
> The password dict can be refreshed by the generic refresh policy. We also want 
> to support this in RouterAdminServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-23 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728107#comment-16728107
 ] 

Fei Hui commented on HDFS-13856:


[~elgoiri] Uploaded v002 patch:
* remove the extra line in the imports of RouterAdmin.
* remove the semicolon in RouterAdmin#970
* undo the last line change in RouterAdmin
* add this to the documentation
{quote}
Kind of weird the way genericRefresh shows the error in the finally. One could 
also do the returns directly without storing it in returnCode.
{quote}
I found that it's the same as DFSAdmin.java. I guess this feature is from 
DFSAdmin.
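For reference, the "return directly" style mentioned in the quote would look roughly like the following (a generic sketch against GenericRefreshProtocol; the method shape is illustrative, not the actual RouterAdmin/DFSAdmin code):
{code:java}
import java.io.IOException;
import java.util.Collection;
import org.apache.hadoop.ipc.GenericRefreshProtocol;
import org.apache.hadoop.ipc.RefreshResponse;

// Sketch only: report and return as soon as the refresh outcome is known,
// instead of carrying it in a returnCode variable and printing the error in
// a finally block.
static int genericRefresh(GenericRefreshProtocol proxy, String identifier,
    String[] args) throws IOException {
  Collection<RefreshResponse> responses = proxy.refresh(identifier, args);
  for (RefreshResponse response : responses) {
    System.out.println(response);        // each responder's result
    if (response.getReturnCode() != 0) {
      return response.getReturnCode();   // fail fast on the first error
    }
  }
  return 0;
}
{code}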


> RBF: RouterAdmin should support dfsrouteradmin -refresh command
> ---
>
> Key: HDFS-13856
> URL: https://issues.apache.org/jira/browse/HDFS-13856
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13856-HDFS-13891.001.patch, 
> HDFS-13856-HDFS-13891.002.patch, HDFS-13856.001.patch, HDFS-13856.002.patch
>
>
> Like the namenode, the router should support refreshing policies individually. For 
> example, we have implemented simple password authentication per RPC connection. 
> The password dict can be refreshed by the generic refresh policy. We also want 
> to support this in RouterAdminServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-23 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-13856:
---
Attachment: HDFS-13856-HDFS-13891.002.patch

> RBF: RouterAdmin should support dfsrouteradmin -refresh command
> ---
>
> Key: HDFS-13856
> URL: https://issues.apache.org/jira/browse/HDFS-13856
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13856-HDFS-13891.001.patch, 
> HDFS-13856-HDFS-13891.002.patch, HDFS-13856.001.patch, HDFS-13856.002.patch
>
>
> Like the namenode, the router should support refreshing policies individually. For 
> example, we have implemented simple password authentication per RPC connection. 
> The password dict can be refreshed by the generic refresh policy. We also want 
> to support this in RouterAdminServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-921) Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)

2018-12-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728096#comment-16728096
 ] 

Hudson commented on HDDS-921:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15660 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15660/])
HDDS-921. Add JVM pause monitor to Ozone Daemons (OM, SCM and (bharat: rev 
26e4be7022626c6e814570b455dbf5bbdf410d61)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java


> Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)
> --
>
> Key: HDDS-921
> URL: https://issues.apache.org/jira/browse/HDDS-921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-921.00.patch
>
>
> Currently the JVM pause monitor is not added to Ozone daemons like OM, SCM and 
> Ozone Datanodes. This jira proposes to add the JVM pause monitor to these daemons.
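The change essentially wires Hadoop's existing JvmPauseMonitor into each daemon's start/stop path; a minimal sketch of the pattern (exact placement inside OM, SCM and the datanode state machine may differ from the committed patch):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.JvmPauseMonitor;

// Sketch only: start the pause monitor together with the daemon ...
JvmPauseMonitor jvmPauseMonitor = new JvmPauseMonitor();
jvmPauseMonitor.init(new Configuration());  // an OzoneConfiguration in practice
jvmPauseMonitor.start();

// ... and stop it in the daemon's shutdown path.
jvmPauseMonitor.stop();
{code}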



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14161) RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection

2018-12-23 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14161:
---
Attachment: HDFS-14161-HDFS-13891.005.patch

> RBF: Throw StandbyException instead of IOException so that client can retry 
> when can not get connection
> ---
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161-HDFS-13891.003.patch, 
> HDFS-14161-HDFS-13891.004.patch, HDFS-14161-HDFS-13891.005.patch, 
> HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:775)
>   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at 

[jira] [Commented] (HDFS-14161) RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection

2018-12-23 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728095#comment-16728095
 ] 

Fei Hui commented on HDFS-14161:


[~elgoiri] Uploaded v005 patch:
# add a space after the colon in the standby log message
# throw StandbyException when catching ConnectionNullException and log the details (see the sketch below)
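A rough sketch of the second point (assumed structure and names, not the actual RouterRpcClient code):
{code:java}
// Sketch only: translate a failed connection into a retriable exception.
try {
  connection = getConnection(ugi, nsId, rpcAddress);
} catch (ConnectionNullException e) {
  LOG.error("Cannot get a connection to {}: {}", rpcAddress, e.getMessage());
  // Unlike a plain IOException, StandbyException is handled by the client's
  // failover/retry policy, so the caller retries instead of hanging.
  throw new StandbyException(e.getMessage());
}
{code}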

> RBF: Throw StandbyException instead of IOException so that client can retry 
> when can not get connection
> ---
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161-HDFS-13891.003.patch, 
> HDFS-14161-HDFS-13891.004.patch, HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:775)
>   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> 

[jira] [Updated] (HDDS-921) Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)

2018-12-23 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-921:

   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk.

> Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)
> --
>
> Key: HDDS-921
> URL: https://issues.apache.org/jira/browse/HDDS-921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-921.00.patch
>
>
> Currently the JVM pause monitor is not added to Ozone daemons like OM, SCM and 
> Ozone Datanodes. This jira proposes to add the JVM pause monitor to these daemons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-921) Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)

2018-12-23 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728093#comment-16728093
 ] 

Bharat Viswanadham commented on HDDS-921:
-

Thank You [~xyao] for the review.

I will commit this shortly.

> Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)
> --
>
> Key: HDDS-921
> URL: https://issues.apache.org/jira/browse/HDDS-921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-921.00.patch
>
>
> Currently the JVM pause monitor is not added to Ozone daemons like OM, SCM and 
> Ozone Datanodes. This jira proposes to add the JVM pause monitor to these daemons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14170) Fix white spaces related to SBN reads.

2018-12-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728069#comment-16728069
 ] 

Hadoop QA commented on HDFS-14170:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
18s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
51s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 3s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
9s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 3s{color} | {color:green} root: The patch generated 0 new + 183 unchanged - 3 
fixed = 183 total (was 186) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
7s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
56s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14170 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952959/HDFS-14170-HDFS-12943.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1a80c3de34b6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / f7072e0 |
| maven | version: 

[jira] [Commented] (HDDS-921) Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)

2018-12-23 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728068#comment-16728068
 ] 

Xiaoyu Yao commented on HDDS-921:
-

Thanks [~bharatviswa] for working on this. Patch LGTM, +1.

> Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)
> --
>
> Key: HDDS-921
> URL: https://issues.apache.org/jira/browse/HDDS-921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-921.00.patch
>
>
> Currently the JVM pause monitor is not added to Ozone daemons like OM, SCM and 
> Ozone Datanodes. This jira proposes to add the JVM pause monitor to these daemons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-102) SCM CA: SCM CA server signs certificate for approved CSR

2018-12-23 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-102:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~anu] for the contribution and all for the reviews. I've committed the 
patch to the feature branch.

> SCM CA: SCM CA server signs certificate for approved CSR
> 
>
> Key: HDDS-102
> URL: https://issues.apache.org/jira/browse/HDDS-102
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-102-HDDS-4.001.patch, HDDS-102-HDDS-4.001.patch, 
> HDDS-102-HDDS-4.002.patch, HDDS-102-HDDS-4.003.patch, 
> HDDS-102-HDDS-4.004.patch, HDDS-102-HDDS-4.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14170) Fix white spaces related to SBN reads.

2018-12-23 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14170:
---
Status: Patch Available  (was: Open)

Javadoc and whitespace changes only.

> Fix white spaces related to SBN reads.
> --
>
> Key: HDFS-14170
> URL: https://issues.apache.org/jira/browse/HDFS-14170
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-14170-HDFS-12943.001.patch
>
>
> This is to fix some checkstyle warnings, mostly whitespace, before merging the 
> HDFS-12943 branch to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14170) Fix white spaces related to SBN reads.

2018-12-23 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14170:
---
Attachment: HDFS-14170-HDFS-12943.001.patch

> Fix white spaces related to SBN reads.
> --
>
> Key: HDFS-14170
> URL: https://issues.apache.org/jira/browse/HDFS-14170
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-14170-HDFS-12943.001.patch
>
>
> This is to fix some checkstyle warnings, mostly whitespace, before merging the 
> HDFS-12943 branch to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14170) Fix white spaces related to SBN reads.

2018-12-23 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-14170:
--

 Summary: Fix white spaces related to SBN reads.
 Key: HDFS-14170
 URL: https://issues.apache.org/jira/browse/HDFS-14170
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko


This is to fix some checkstyle warnings, mostly whitespace, before merging the 
HDFS-12943 branch to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14169) RBF: Correct the returned value in case of IOException in NamenodeBeanMetrics#getFederationMetrics

2018-12-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728005#comment-16728005
 ] 

Hadoop QA commented on HDFS-14169:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m  
4s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14169 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952943/HDFS-14169-HDFS-13891-01.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3078d45af4dc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / c9ebaf2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25854/testReport/ |
| Max. process+thread count | 1374 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25854/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Correct the returned value in case of IOException in 
> NamenodeBeanMetrics#getFederationMetrics
> 

[jira] [Created] (HDFS-14169) RBF: Correct the returned value in case of IOException in NamenodeBeanMetrics#getFederationMetrics

2018-12-23 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-14169:
---

 Summary: RBF: Correct the returned value in case of IOException in 
NamenodeBeanMetrics#getFederationMetrics
 Key: HDFS-14169
 URL: https://issues.apache.org/jira/browse/HDFS-14169
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ayush Saxena
Assignee: Ayush Saxena


Presently, in case of IOException, the metrics value returned is 0, which is a 
legal entry. Better to change it to a value which could indicate that the value 
hasn't actually been fetched.
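For context, the pattern in question looks roughly like this (an illustrative sketch; the getter and metric names are assumptions, not the actual NamenodeBeanMetrics code):
{code:java}
// Illustrative sketch of a NamenodeBeanMetrics-style getter (names assumed).
@Override
public int getNumLiveDataNodes() {
  try {
    return getFederationMetrics().getNumLiveNodes();
  } catch (IOException e) {
    LOG.debug("Failed to fetch federation metrics", e);
    // 0 is also a legal value, so callers cannot tell it apart from
    // "the value could not be fetched".
    return 0;
  }
}
{code}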



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14169) RBF: Correct the returned value in case of IOException in NamenodeBeanMetrics#getFederationMetrics

2018-12-23 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14169:

Status: Patch Available  (was: Open)

> RBF: Correct the returned value in case of IOException in 
> NamenodeBeanMetrics#getFederationMetrics
> --
>
> Key: HDFS-14169
> URL: https://issues.apache.org/jira/browse/HDFS-14169
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14169-HDFS-13891-01.patch
>
>
> Presently, in case of IOException, the metrics value returned is 0, which is 
> a legal entry. Better to change it to a value which could indicate that the value 
> hasn't actually been fetched.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14169) RBF: Correct the returned value in case of IOException in NamenodeBeanMetrics#getFederationMetrics

2018-12-23 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14169:

Attachment: HDFS-14169-HDFS-13891-01.patch

> RBF: Correct the returned value in case of IOException in 
> NamenodeBeanMetrics#getFederationMetrics
> --
>
> Key: HDFS-14169
> URL: https://issues.apache.org/jira/browse/HDFS-14169
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14169-HDFS-13891-01.patch
>
>
> Presently, in case of IOException, the metrics value returned is 0, which is 
> a legal entry. Better to change it to a value which could indicate that the value 
> hasn't actually been fetched.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14165) In NameNode UI under DataNode tab, the Capacity column is Non-Aligned

2018-12-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16727940#comment-16727940
 ] 

Hudson commented on HDFS-14165:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15659 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15659/])
HDFS-14165. In NameNode UI under DataNode tab, the Capacity column is 
(surendralilhore: rev d944d5ec460aad14f4d086de2dbd26445680039d)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


> In NameNode UI under DataNode tab, the Capacity column is Non-Aligned
> -
>
> Key: HDFS-14165
> URL: https://issues.apache.org/jira/browse/HDFS-14165
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: After-Fix(Aligned).png, HDFS-14165.001.patch, 
> HDFS-14165.002.patch, image-2018-12-21-15-02-18-173.png
>
>
> !image-2018-12-21-15-02-18-173.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14165) In NameNode UI under DataNode tab, the Capacity column is Non-Aligned

2018-12-23 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-14165:
--
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~shubham.dewan] for the contribution.

Committed to trunk.

> In NameNode UI under DataNode tab, the Capacity column is Non-Aligned
> -
>
> Key: HDFS-14165
> URL: https://issues.apache.org/jira/browse/HDFS-14165
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: After-Fix(Aligned).png, HDFS-14165.001.patch, 
> HDFS-14165.002.patch, image-2018-12-21-15-02-18-173.png
>
>
> !image-2018-12-21-15-02-18-173.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14165) In NameNode UI under DataNode tab, the Capacity column is Non-Aligned

2018-12-23 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16727832#comment-16727832
 ] 

Surendra Singh Lilhore commented on HDFS-14165:
---

Thanks [~shubham.dewan]

+1, committing this shortly

> In NameNode UI under DataNode tab, the Capacity column is Non-Aligned
> -
>
> Key: HDFS-14165
> URL: https://issues.apache.org/jira/browse/HDFS-14165
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Minor
> Attachments: After-Fix(Aligned).png, HDFS-14165.001.patch, 
> HDFS-14165.002.patch, image-2018-12-21-15-02-18-173.png
>
>
> !image-2018-12-21-15-02-18-173.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org