[jira] [Updated] (HDFS-13286) Add haadmin commands to transition between standby and observer

2018-04-23 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13286:

Attachment: HDFS-13286-HDFS-12943.001.patch

> Add haadmin commands to transition between standby and observer
> ---
>
> Key: HDFS-13286
> URL: https://issues.apache.org/jira/browse/HDFS-13286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13286-HDFS-12943.000.patch, 
> HDFS-13286-HDFS-12943.001.patch
>
>
> As discussed in HDFS-12975, we should allow explicit transition between 
> standby and observer through haadmin command, such as:
> {code}
> haadmin -transitionToObserver
> {code}
> Initially we should support transition from observer to standby, and standby 
> to observer.
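
For illustration, a minimal usage sketch of the proposed commands (the service ID "nn1" is a placeholder, and the exact flags may differ from the final patch):
{code}
# move the named NameNode into and out of the observer role
hdfs haadmin -transitionToObserver nn1
hdfs haadmin -transitionToStandby nn1
{code}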






[jira] [Commented] (HDFS-13286) Add haadmin commands to transition between standby and observer

2018-04-23 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449277#comment-16449277
 ] 

Chao Sun commented on HDFS-13286:
-

Thanks [~vagarychen] and [~ajayydv] for the review! I'll address the comments and 
submit another patch shortly.

bq. HAAdmin#transitionToObserver L237 doesn't print an error message. It seems 
checkSupportObserver was added to check whether the target supports Observer and 
print an error message, but it's not used currently.

Yes, you are right. I added this method but forgot to use it...

bq. rename checkManualStateManagementOK to something like 
isManualStateChangeAllowed?

This method is not related to this JIRA so perhaps we should leave it out for 
now?

> Add haadmin commands to transition between standby and observer
> ---
>
> Key: HDFS-13286
> URL: https://issues.apache.org/jira/browse/HDFS-13286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13286-HDFS-12943.000.patch
>
>
> As discussed in HDFS-12975, we should allow explicit transition between 
> standby and observer through haadmin command, such as:
> {code}
> haadmin -transitionToObserver
> {code}
> Initially we should support transition from observer to standby, and standby 
> to observer.






[jira] [Commented] (HDFS-13286) Add haadmin commands to transition between standby and observer

2018-04-23 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449263#comment-16449263
 ] 

Ajay Kumar commented on HDFS-13286:
---

[~csun], thanks for working on this. A few minor nits:
* HAAdmin#transitionToObserver L237 doesn't print an error message. It seems 
checkSupportObserver was added to check whether the target supports Observer and 
print an error message, but it's not used currently (a rough sketch of wiring it 
in follows below).
* Rename checkManualStateManagementOK to something like 
isManualStateChangeAllowed?
* Update the HAAdmin L75 help msg to "Transitions the service to Observer state."
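
For reference, a rough sketch of one way the unused check could be wired in. This is hypothetical code, not the actual HAAdmin implementation or the attached patch; resolveTarget, createReqInfo and the simplified signature are assumptions:
{code}
// Hypothetical sketch only, not the real HAAdmin code or the attached patch.
private int transitionToObserver(final String serviceId) throws IOException {
  HAServiceTarget target = resolveTarget(serviceId);        // assumed helper
  if (!checkManualStateManagementOK(target)) {
    return -1;
  }
  if (!checkSupportObserver(target)) {                       // prints its own error message
    return -1;
  }
  HAServiceProtocol proto = target.getProxy(getConf(), 0);
  proto.transitionToObserver(createReqInfo());               // assumed request-info helper
  return 0;
}
{code}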

> Add haadmin commands to transition between standby and observer
> ---
>
> Key: HDFS-13286
> URL: https://issues.apache.org/jira/browse/HDFS-13286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13286-HDFS-12943.000.patch
>
>
> As discussed in HDFS-12975, we should allow explicit transition between 
> standby and observer through haadmin command, such as:
> {code}
> haadmin -transitionToObserver
> {code}
> Initially we should support transition from observer to standby, and standby 
> to observer.






[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449255#comment-16449255
 ] 

genericqa commented on HDFS-13399:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
34s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
53s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
53s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
40s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
55s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 14s{color} | {color:orange} root: The patch generated 14 new + 544 unchanged 
- 0 fixed = 558 total (was 544) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
45s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}112m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}278m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestStateAlignmentContextWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13399 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920371/HDFS-13399-HDFS-12943.003.patch
 |
| Optional Tests |  asflicense  compile  

[jira] [Commented] (HDFS-13490) RBF: Fix setSafeMode in the Router

2018-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449236#comment-16449236
 ] 

Hudson commented on HDFS-13490:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14054 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14054/])
HDFS-13490. RBF: Fix setSafeMode in the Router. Contributed by Inigo (yqlin: 
rev b06601acce38ed60b726b99e2830f38a1ee3d2b5)
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestSafeMode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java


> RBF: Fix setSafeMode in the Router
> --
>
> Key: HDFS-13490
> URL: https://issues.apache.org/jira/browse/HDFS-13490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.1
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13490.000.patch, HDFS-13490.001.patch
>
>
> RouterRpcServer doesn't handle the isChecked parameter correctly when 
> forwarding setSafeMode to the namenodes.
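
For illustration, a sketch of the bug class (getClientsForAllNamespaces() is a hypothetical helper and this is not the committed RouterRpcServer change); the essential point is that the caller's isChecked flag must reach the namenodes unchanged:
{code}
// Illustrative only, not the committed patch.
public boolean setSafeMode(SafeModeAction action, boolean isChecked)
    throws IOException {
  boolean anyInSafeMode = false;
  for (ClientProtocol nn : getClientsForAllNamespaces()) {
    // forward the client's flag as-is instead of dropping or hard-coding it
    anyInSafeMode |= nn.setSafeMode(action, isChecked);
  }
  return anyInSafeMode;
}
{code}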






[jira] [Updated] (HDFS-13490) RBF: Fix setSafeMode in the Router

2018-04-23 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13490:
-
Affects Version/s: 3.0.1

> RBF: Fix setSafeMode in the Router
> --
>
> Key: HDFS-13490
> URL: https://issues.apache.org/jira/browse/HDFS-13490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.1
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13490.000.patch, HDFS-13490.001.patch
>
>
> RouterRpcServer doesn't handle the isChecked parameter correctly when 
> forwarding setSafeMode to the namenodes.






[jira] [Updated] (HDFS-13490) RBF: Fix setSafeMode in the Router

2018-04-23 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13490:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 3.0.4
  2.9.2
  3.1.1
  3.2.0
  2.10.0
Target Version/s: 3.2.0, 3.1.1
  Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.1, branch-3.0, branch-2 and branch-2.9.
Thanks [~elgoiri] for the contribution.

> RBF: Fix setSafeMode in the Router
> --
>
> Key: HDFS-13490
> URL: https://issues.apache.org/jira/browse/HDFS-13490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13490.000.patch, HDFS-13490.001.patch
>
>
> RouterRpcServer doesn't handle the isChecked parameter correctly when 
> forwarding setSafeMode to the namenodes.






[jira] [Commented] (HDFS-13490) RBF: Fix setSafeMode in the Router

2018-04-23 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449225#comment-16449225
 ] 

Yiqun Lin commented on HDFS-13490:
--

LGTM, +1. Committing this.

> RBF: Fix setSafeMode in the Router
> --
>
> Key: HDFS-13490
> URL: https://issues.apache.org/jira/browse/HDFS-13490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13490.000.patch, HDFS-13490.001.patch
>
>
> RouterRpcServer doesn't handle the isChecked parameter correctly when 
> forwarding setSafeMode to the namenodes.






[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449196#comment-16449196
 ] 

Hudson commented on HDFS-13336:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14053 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14053/])
HDFS-13336. Test cases of TestWriteToReplica failed in windows. (inigoiri: rev 
df92a17e02fe86279a6f4e413719d0a465b50837)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java
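
This commit touches only the test. For background, a rough sketch of the usual remedy for this class of Windows failure (hypothetical, not necessarily what the committed change does): give each MiniDFSCluster run its own randomized base directory so undeletable leftovers from a previous run cannot collide.
{code}
// Sketch only; the committed patch may differ.
File baseDir = GenericTestUtils.getRandomizedTestDir();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf, baseDir)
    .numDataNodes(1)
    .build();
{code}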


> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)






[jira] [Commented] (HDFS-13493) Reduce the HttpServer2 thread count on DataNodes

2018-04-23 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449190#comment-16449190
 ] 

Íñigo Goiri commented on HDFS-13493:


[^HDFS-13272.000.patch] looks good.
The unit test failure doesn't seem related.
As [~xkrogen] mentions, this shouldn't have an impact on the DN, as this is the 
info server.
+1

> Reduce the HttpServer2 thread count on DataNodes
> 
>
> Key: HDFS-13493
> URL: https://issues.apache.org/jira/browse/HDFS-13493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272.000.patch
>
>
> Given that HFTP was removed in Hadoop 3 and WebHDFS is handled via Netty, the 
> HttpServer2 instance within the DataNode is only used for very basic tasks 
> such as the web UI. Thus we can safely reduce the thread count used here.
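
For context, a minimal sketch of the kind of change this implies (illustrative values, not the attached patch): HttpServer2 honors the hadoop.http.max.threads setting, so the DataNode's internal info server can be built with a deliberately small pool, since it only serves the UI and JMX rather than WebHDFS data traffic.
{code}
// Illustrative sketch; the thread count and endpoint are placeholders.
Configuration confForInfoServer = new Configuration(conf);
confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS_KEY, 10);
HttpServer2 infoServer = new HttpServer2.Builder()
    .setName("datanode")
    .setConf(confForInfoServer)
    .addEndpoint(URI.create("http://localhost:0"))
    .build();
infoServer.start();
{code}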






[jira] [Updated] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13336:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.4
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)






[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449189#comment-16449189
 ] 

Íñigo Goiri commented on HDFS-13336:


Thanks [~surmountian] for the fix.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)






[jira] [Commented] (HDFS-13485) DataNode WebHDFS endpoint throws NPE

2018-04-23 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449165#comment-16449165
 ] 

Wei-Chiu Chuang commented on HDFS-13485:


It probably makes sense to throw HadoopIllegalArgumentException instead.
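
A rough sketch of that suggestion (hypothetical, not a committed fix; param() stands in for however ParameterParser reads the query string):
{code}
// Hypothetical sketch, not the actual ParameterParser code.
Token<DelegationTokenIdentifier> delegationToken() throws IOException {
  String delegation = param(DelegationParam.NAME);   // assumed accessor
  if (delegation == null || delegation.isEmpty()) {
    // fail with a descriptive error instead of an NPE deep inside Token
    throw new HadoopIllegalArgumentException(
        "Missing required parameter: " + DelegationParam.NAME);
  }
  Token<DelegationTokenIdentifier> token = new Token<>();
  token.decodeFromUrlString(delegation);
  return token;
}
{code}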

> DataNode WebHDFS endpoint throws NPE
> 
>
> Key: HDFS-13485
> URL: https://issues.apache.org/jira/browse/HDFS-13485
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, webhdfs
>Affects Versions: 3.0.0
> Environment: Kerberized. Hadoop 3.0.0, WebHDFS.
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>
> curl -k -i --negotiate -u : "https://hadoop3-4.example.com:20004/webhdfs/v1"
> The DataNode Web UI should do better error checking/handling.
> {noformat}
> 2018-04-19 10:07:49,338 WARN 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler: 
> INTERNAL_SERVER_ERROR
> java.lang.NullPointerException
> at 
> org.apache.hadoop.security.token.Token.decodeWritable(Token.java:364)
> at 
> org.apache.hadoop.security.token.Token.decodeFromUrlString(Token.java:383)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.ParameterParser.delegationToken(ParameterParser.java:128)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugi(DataNodeUGIProvider.java:76)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.channelRead0(WebHdfsHandler.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:51)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:31)
> at 
> com.cloudera.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> com.cloudera.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> com.cloudera.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
> at 
> com.cloudera.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> com.cloudera.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1379)
> at 
> com.cloudera.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1158)
> at 
> com.cloudera.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1193)
> at 
> com.cloudera.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
> at 
> com.cloudera.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
> at 
> com.cloudera.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> com.cloudera.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> com.cloudera.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> 

[jira] [Commented] (HDFS-13492) Limit httpfs binds to certain IP addresses in branch-2

2018-04-23 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449164#comment-16449164
 ] 

Wei-Chiu Chuang commented on HDFS-13492:


Hey Ajay, thanks for the review.

Suppose the user didn't configure the environment variable HTTPFS_HTTP_HOSTNAME 
(which should always be the case, since it was not advertised before); then this 
change is backward-compatible, meaning httpfs still binds to all IP addresses.

I can add a release note to this Jira for sure.
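
For reference, a usage sketch under the assumptions above (branch-2 style startup; the address is just an example):
{noformat}
# Bind httpfs only to the loopback address; leaving the variable unset keeps
# the old bind-to-all-addresses behaviour.
export HTTPFS_HTTP_HOSTNAME=127.0.0.1
httpfs.sh start
{noformat}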

> Limit httpfs binds to certain IP addresses in branch-2
> --
>
> Key: HDFS-13492
> URL: https://issues.apache.org/jira/browse/HDFS-13492
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-13492.branch-2.001.patch
>
>
> Currently httpfs binds to all IP addresses of the host by default. Some 
> operators want to limit httpfs to accept only local connections.
> We should provide that option, and it's pretty doable in Hadoop 2.x.
> Note that the httpfs underlying implementation changed in Hadoop 3, and the 
> Jetty-based httpfs implementation already supports this, I believe.






[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449161#comment-16449161
 ] 

Íñigo Goiri commented on HDFS-13336:


 [^HDFS-13336.003.patch] LGTM.
+1
I'll commit all the way to 2.9 in the next couple hours if there are no 
concerns.

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)






[jira] [Commented] (HDFS-13283) Percentage based Reserved Space Calculation for DataNode

2018-04-23 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449158#comment-16449158
 ] 

Íñigo Goiri commented on HDFS-13283:


[^HDFS-13283.007.patch] LGTM.
The failed unit tests seem like the usual suspects.
+1

> Percentage based Reserved Space Calculation for DataNode
> 
>
> Key: HDFS-13283
> URL: https://issues.apache.org/jira/browse/HDFS-13283
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13283.000.patch, HDFS-13283.001.patch, 
> HDFS-13283.002.patch, HDFS-13283.003.patch, HDFS-13283.004.patch, 
> HDFS-13283.005.patch, HDFS-13283.006.patch, HDFS-13283.007.patch
>
>
> Currently, the only way to configure reserved disk space for non-HDFS data on 
> a DataNode is a constant value via {{dfs.datanode.du.reserved}}. This can be 
> an issue in heterogeneous clusters where the sizes of DNs differ. The 
> proposed solution is to allow percentage-based configuration (and 
> combinations of the two):
>  # ABSOLUTE
>  ** based on an absolute amount of reserved space
>  # PERCENTAGE
>  ** based on a percentage of the total capacity of the storage
>  # CONSERVATIVE
>  ** calculates both of the above and takes the one that yields more 
> reserved space
>  # AGGRESSIVE
>  ** calculates 1. and 2. and takes the one that yields less reserved space
>  
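
To make the combination modes concrete, a small illustrative calculation (variable names are placeholders, not the patch's configuration keys):
{code}
// Illustrative only; names are placeholders rather than the patch's API.
long absolute = reservedBytes;                          // fixed reservation
long percentage = (long) (capacity * reservedPercent / 100.0);
long conservative = Math.max(absolute, percentage);     // yields more reserved space
long aggressive = Math.min(absolute, percentage);       // yields less reserved space
{code}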






[jira] [Resolved] (HDFS-3653) 1.x: Add a retention period for purged edit logs

2018-04-23 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-3653.
---
Resolution: Won't Fix

> 1.x: Add a retention period for purged edit logs
> 
>
> Key: HDFS-3653
> URL: https://issues.apache.org/jira/browse/HDFS-3653
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 1.1.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
>
> Occasionally we have a bug which causes something to go wrong with edits 
> files. Even more occasionally the bug is such that the namenode mistakenly 
> deletes an {{edits}} file without merging it into {{fsimage}} properly -- e.g. 
> if the bug mistakenly writes an OP_INVALID at the top of the log.
> In trunk/2.0 we retain many edit log segments going back in time to be more 
> robust to this kind of error. I'd like to implement something similar (but 
> much simpler) in 1.x, which would be used only by HDFS developers in 
> root-causing or repairing from these rare scenarios: the NN should never 
> directly delete an edit log file. Instead, it should rename the file into 
> some kind of "trash" directory inside the name dir, and associate it with a 
> timestamp. Then, periodically a separate thread should scan the trash dirs 
> and delete any logs older than a configurable time.
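
A compact sketch of the proposed behaviour (hypothetical names; none of this is real NameNode code):
{code}
// Instead of deleting an edits file, move it into a per-name-dir trash
// directory, tagging it with the removal time so a scanner can apply a
// configurable retention period.
void moveEditsToTrash(File edits, File nameDir) throws IOException {
  File trash = new File(nameDir, "edits-trash");
  if (!trash.isDirectory() && !trash.mkdirs()) {
    throw new IOException("Cannot create " + trash);
  }
  File target = new File(trash, edits.getName() + "." + System.currentTimeMillis());
  if (!edits.renameTo(target)) {
    throw new IOException("Cannot move " + edits + " to " + target);
  }
}

void purgeTrash(File trash, long retentionMs) {
  long cutoff = System.currentTimeMillis() - retentionMs;
  File[] files = trash.listFiles();
  if (files == null) {
    return;
  }
  for (File f : files) {
    String name = f.getName();
    long ts = Long.parseLong(name.substring(name.lastIndexOf('.') + 1));
    if (ts < cutoff) {
      f.delete();
    }
  }
}
{code}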






[jira] [Assigned] (HDFS-3041) DFSOutputStream.close doesn't properly handle interruption

2018-04-23 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reassigned HDFS-3041:
-

Assignee: (was: Todd Lipcon)

> DFSOutputStream.close doesn't properly handle interruption
> --
>
> Key: HDFS-3041
> URL: https://issues.apache.org/jira/browse/HDFS-3041
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.23.0, 2.0.0-alpha
>Reporter: Todd Lipcon
>Priority: Major
> Attachments: test.txt
>
>
> TestHFlush.testHFlushInterrupted can fail occasionally due to a race: if a 
> thread is interrupted while calling close(), then the {{finally}} clause of 
> the {{close}} function sets {{closed = true}}. At this point it has enqueued 
> the "end of block" packet to the DNs, but hasn't called {{completeFile}}. 
> Then, if {{close}} is called again (as in the test case), it will be 
> short-circuited since {{closed}} is already true. Thus {{completeFile}} never 
> ends up getting called. This also means that the test can fail if the 
> pipeline is running slowly, since the assertion that the file is the correct 
> length won't see the last packet or two.
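
A generic illustration of the race described above (simplified structure, not the actual DFSOutputStream code):
{code}
// Simplified illustration only, not the real DFSOutputStream.
public void close() throws IOException {
  if (closed) {
    return;                   // the retry short-circuits here
  }
  try {
    flushBuffersAndEnqueueLastPacket();   // may throw if the thread is interrupted
    completeFile();                        // skipped when the line above throws
  } finally {
    closed = true;            // set even on interruption, so the file is never completed
  }
}
{code}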






[jira] [Resolved] (HDFS-3069) If an edits file has more edits in it than expected by its name, should trigger an error

2018-04-23 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-3069.
---
  Resolution: Won't Fix
Target Version/s:   (was: )

> If an edits file has more edits in it than expected by its name, should 
> trigger an error
> 
>
> Key: HDFS-3069
> URL: https://issues.apache.org/jira/browse/HDFS-3069
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.23.0, 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
>
> In testing what happens in HA split brain scenarios, I ended up with an edits 
> log that was named edits_47-47 but actually had two edits in it (#47 and 
> #48). The edits loading process should detect this situation and barf. 
> Otherwise, the problem shows up later during loading or even on the next 
> restart, and is tough to fix.






[jira] [Assigned] (HDFS-3447) StandbyException should not be logged at ERROR level on server

2018-04-23 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reassigned HDFS-3447:
-

Assignee: (was: Todd Lipcon)

> StandbyException should not be logged at ERROR level on server
> --
>
> Key: HDFS-3447
> URL: https://issues.apache.org/jira/browse/HDFS-3447
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Priority: Minor
>
> Currently, the standby NN will log StandbyExceptions at ERROR level any time 
> a client tries to connect to it. So, if the second NN in an HA pair is 
> active, the first NN will spew a lot of these errors in the log, as each 
> client gets redirected to the proper NN. Instead, this should be at INFO 
> level, and should probably be logged in a less "scary" manner (eg "Received 
> READ request from client 1.2.3.4, but in Standby state. Redirecting client to 
> other NameNode.")






[jira] [Updated] (HDFS-5058) QJM should validate startLogSegment() more strictly

2018-04-23 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-5058:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> QJM should validate startLogSegment() more strictly
> ---
>
> Key: HDFS-5058
> URL: https://issues.apache.org/jira/browse/HDFS-5058
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm
>Affects Versions: 2.1.0-beta, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5098.patch, hdfs-5058.txt
>
>
> We've seen, a small handful of times, a case where one of the NNs in an HA 
> cluster ends up with an fsimage checkpoint that falls in the middle of an 
> edit segment. We're not sure yet how this happens, but one issue can happen 
> as a result:
> - Node has fsimage_500. Cluster has edits_1-1000, edits_1001_inprogress
> - Node restarts, loads fsimage_500
> - Node wants to become active. It calls selectInputStreams(500). Currently, 
> this API logs a WARN that 500 falls in the middle of the 1-1000 segment, but 
> continues and returns no results.
> - Node calls startLogSegment(501).
> Currently, the QJM will accept this (incorrectly). The node then crashes when 
> it first tries to journal a real transaction, but it ends up leaving the 
> edits_501_inprogress lying around, potentially causing more issues later.






[jira] [Commented] (HDFS-13493) Reduce the HttpServer2 thread count on DataNodes

2018-04-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449140#comment-16449140
 ] 

genericqa commented on HDFS-13493:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13493 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920366/HDFS-13272.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 77f55dbc0ec5 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 
12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7ab08a9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24050/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24050/testReport/ |
| Max. process+thread count | 3575 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-13286) Add haadmin commands to transition between standby and observer

2018-04-23 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449121#comment-16449121
 ] 

Chen Liang commented on HDFS-13286:
---

Thanks [~csun] for working on this! The patch LGTM, only two minor nits:

DummyHAService#transitionToObserver: eqInfo -> req, to be consistent with the 
other methods.

StandbyState#setState: make the two if checks one?

> Add haadmin commands to transition between standby and observer
> ---
>
> Key: HDFS-13286
> URL: https://issues.apache.org/jira/browse/HDFS-13286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13286-HDFS-12943.000.patch
>
>
> As discussed in HDFS-12975, we should allow explicit transition between 
> standby and observer through haadmin command, such as:
> {code}
> haadmin -transitionToObserver
> {code}
> Initially we should support transition from observer to standby, and standby 
> to observer.






[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449120#comment-16449120
 ] 

Xiao Liang commented on HDFS-13336:
---

The failed tests are not related to this change, so it should be good?

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
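
A common way to avoid this kind of "Could not fully delete" collision on Windows is to give every run its own MiniDFSCluster base directory. Below is a minimal sketch of that pattern; the use of {{GenericTestUtils.getRandomizedTestDir()}} here is an assumption for illustration, not necessarily what the attached patches do:

{code}
import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.test.GenericTestUtils;

public class RandomizedBaseDirExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Use a unique directory per run so leftovers from a previous run,
    // which Windows may still hold open, cannot block deletion.
    File baseDir = GenericTestUtils.getRandomizedTestDir();
    conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir.getAbsolutePath());
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1)
        .build();
    try {
      // ... exercise the cluster as the test would ...
    } finally {
      cluster.shutdown();
    }
  }
}
{code}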



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449081#comment-16449081
 ] 

genericqa commented on HDFS-13336:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 21 unchanged - 4 fixed = 21 total (was 25) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13336 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920341/HDFS-13336.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c1a0c4dae069 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 42e82f0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24047/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24047/testReport/ |
| Max. process+thread count | 3780 (vs. 

[jira] [Commented] (HDFS-13283) Percentage based Reserved Space Calculation for DataNode

2018-04-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449052#comment-16449052
 ] 

genericqa commented on HDFS-13283:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 16s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13283 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920336/HDFS-13283.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux a9df7d936b43 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f411de6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24046/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 

[jira] [Commented] (HDFS-13326) RBF: Improve the interfaces to modify and view mount tables

2018-04-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449050#comment-16449050
 ] 

Íñigo Goiri commented on HDFS-13326:


The errors for [^HDFS-13326.002.patch] come from HDFS itself and the commands md, not from this patch.
+1

> RBF: Improve the interfaces to modify and view mount tables
> ---
>
> Key: HDFS-13326
> URL: https://issues.apache.org/jira/browse/HDFS-13326
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
> Attachments: HDFS-13326.000.patch, HDFS-13326.001.patch, 
> HDFS-13326.002.patch
>
>
> From the DFSRouterAdmin cmd, the update logic is currently implemented inside the 
> add operation, which has some limitations (e.g. it cannot update "readonly" or 
> remove a destination). Given the RPC layer already separates the add and update 
> operations, it would be better to do the same at the cmd level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb

2018-04-23 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449032#comment-16449032
 ] 

Bharat Viswanadham commented on HDFS-13356:
---

I have committed this to trunk and branch-3.1.

Thank you [~shv] for the review.

The failure is due to a mismatch of the protoc version on the machine:
[ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.2.0-SNAPSHOT:protoc (compile-protoc) 
on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
protoc version is 'libprotoc 2.6.1', expected version is '2.5.0' -> [Help 1]
I am able to successfully compile it on my dev machine.

> Balancer:Set default value of minBlockSize to 10mb 
> ---
>
> Key: HDFS-13356
> URL: https://issues.apache.org/jira/browse/HDFS-13356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.7.5
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: balancer, upgrades
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HDFS-13356.00.patch, HDFS-13356.01.patch, 
> HDFS-13356.02.patch
>
>
>  It seems we can run into a problem during a rolling upgrade with this.
> The Balancer is upgraded after NameNodes, so once NN is upgraded it will 
> expect {{minBlockSize}} parameter in {{getBlocks()}}. The Balancer cannot 
> send it yet, so NN will use the default, which you set to 0. So NN will start 
> unexpectedly sending small blocks to the Balancer. So we should
>  # either change the default in protobuf to 10 MB
>  # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use 
> the configuration variable 
> {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}.
> If you agree, we should create a follow up jira. I wanted to backport this 
> down the chain of branches, but this upgrade scenario is stopping me.
> [~shv] commented this in  HDFS-13222 jira.
> https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855
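
For illustration, the second option above would amount to something like the following inside {{NameNodeRpcServer#getBlocks}}, where {{conf}} and the incoming {{minBlockSize}} are already in scope; this is only a sketch of the idea, and the 10 MB fallback value is an assumption rather than the committed change:

{code}
// Sketch: treat minBlockSize == 0 from an old Balancer as "not set" and
// fall back to the NameNode's configured minimum instead of using 0.
long effectiveMinBlockSize = minBlockSize > 0
    ? minBlockSize
    : conf.getLong(DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY,
        10L * 1024 * 1024);  // assumed 10 MB default
{code}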



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13492) Limit httpfs binds to certain IP addresses in branch-2

2018-04-23 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449029#comment-16449029
 ] 

Ajay Kumar commented on HDFS-13492:
---

[~jojochuang], thanks for working on this. Patch looks good. Shall we add 
documentation for this change, since we will limit connectivity to local 
clients?

> Limit httpfs binds to certain IP addresses in branch-2
> --
>
> Key: HDFS-13492
> URL: https://issues.apache.org/jira/browse/HDFS-13492
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-13492.branch-2.001.patch
>
>
> Currently httpfs binds to all IP addresses of the host by default. Some 
> operators want to limit httpfs to accept only local connections.
> We should provide that option, and it's pretty doable in Hadoop 2.x.
> Note that the httpfs underlying implementation changed in Hadoop 3, and the 
> Jetty-based httpfs implementation already supports that, I believe.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13326) RBF: Improve the interfaces to modify and view mount tables

2018-04-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449024#comment-16449024
 ] 

genericqa commented on HDFS-13326:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  1s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
56s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}219m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13326 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920326/HDFS-13326.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  

[jira] [Updated] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb

2018-04-23 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13356:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

> Balancer:Set default value of minBlockSize to 10mb 
> ---
>
> Key: HDFS-13356
> URL: https://issues.apache.org/jira/browse/HDFS-13356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.7.5
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: balancer, upgrades
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HDFS-13356.00.patch, HDFS-13356.01.patch, 
> HDFS-13356.02.patch
>
>
>  It seems we can run into a problem during a rolling upgrade with this.
> The Balancer is upgraded after NameNodes, so once NN is upgraded it will 
> expect {{minBlockSize}} parameter in {{getBlocks()}}. The Balancer cannot 
> send it yet, so NN will use the default, which you set to 0. So NN will start 
> unexpectedly sending small blocks to the Balancer. So we should
>  # either change the default in protobuf to 10 MB
>  # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use 
> the configuration variable 
> {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}.
> If you agree, we should create a follow up jira. I wanted to backport this 
> down the chain of branches, but this upgrade scenario is stopping me.
> [~shv] commented this in  HDFS-13222 jira.
> https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13468) Add erasure coding metrics into ReadStatistics

2018-04-23 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449019#comment-16449019
 ] 

Xiao Chen commented on HDFS-13468:
--

Thanks for the work Eddy.

LGTM overall. It seems we cannot unit test remote bytes easily, the same as all 
existing tests, so I think it should be fine.

+1 pending a typo fix in the test name {{testStatisticsForEresureCodingRead}}: 
s/Eresure/Erasure/g

> Add erasure coding metrics into ReadStatistics
> --
>
> Key: HDFS-13468
> URL: https://issues.apache.org/jira/browse/HDFS-13468
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Attachments: HDFS-13468.00.patch, HDFS-13468.01.patch
>
>
> Expose Erasure Coding related metrics for InputStream in ReadStatistics. 
>  
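
For context, these statistics are already exposed per input stream and can be read as shown below; the snippet assumes an existing {{FileSystem}} handle and does not name the new erasure-coding getters, since those are exactly what the attached patches add:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;

public class ReadStatsExample {
  // Prints the per-stream counters that this JIRA extends with EC metrics.
  static void printReadStats(FileSystem fs, Path path) throws IOException {
    try (FSDataInputStream in = fs.open(path)) {
      in.read(new byte[4096]);
      HdfsDataInputStream hin = (HdfsDataInputStream) in;
      System.out.println("total bytes read: "
          + hin.getReadStatistics().getTotalBytesRead());
      System.out.println("local bytes read: "
          + hin.getReadStatistics().getTotalLocalBytesRead());
    }
  }
}
{code}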



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb

2018-04-23 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13356:
--
Target Version/s: 3.1.1  (was: 3.0.2, 3.1.1)

> Balancer:Set default value of minBlockSize to 10mb 
> ---
>
> Key: HDFS-13356
> URL: https://issues.apache.org/jira/browse/HDFS-13356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.7.5
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: balancer, upgrades
> Attachments: HDFS-13356.00.patch, HDFS-13356.01.patch, 
> HDFS-13356.02.patch
>
>
>  It seems we can run into a problem during a rolling upgrade with this.
> The Balancer is upgraded after NameNodes, so once NN is upgraded it will 
> expect {{minBlockSize}} parameter in {{getBlocks()}}. The Balancer cannot 
> send it yet, so NN will use the default, which you set to 0. So NN will start 
> unexpectedly sending small blocks to the Balancer. So we should
>  # either change the default in protobuf to 10 MB
>  # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use 
> the configuration variable 
> {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}.
> If you agree, we should create a follow up jira. I wanted to backport this 
> down the chain of branches, but this upgrade scenario is stopping me.
> [~shv] commented this in  HDFS-13222 jira.
> https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-23 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HDFS-13399:

Attachment: HDFS-13399-HDFS-12943.003.patch

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch, HDFS-13399-HDFS-12943.002.patch, 
> HDFS-13399-HDFS-12943.003.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb

2018-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449013#comment-16449013
 ] 

Hudson commented on HDFS-13356:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14052 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14052/])
HDFS-13356. Balancer:Set default value of minBlockSize to 10mb. (bharat: rev 
9b5375e0c1ee8c634a5accb7415ec27440543a60)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/NamenodeProtocol.proto


> Balancer:Set default value of minBlockSize to 10mb 
> ---
>
> Key: HDFS-13356
> URL: https://issues.apache.org/jira/browse/HDFS-13356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.7.5
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: balancer, upgrades
> Attachments: HDFS-13356.00.patch, HDFS-13356.01.patch, 
> HDFS-13356.02.patch
>
>
>  It seems we can run into a problem during a rolling upgrade with this.
> The Balancer is upgraded after NameNodes, so once NN is upgraded it will 
> expect {{minBlockSize}} parameter in {{getBlocks()}}. The Balancer cannot 
> send it yet, so NN will use the default, which you set to 0. So NN will start 
> unexpectedly sending small blocks to the Balancer. So we should
>  # either change the default in protobuf to 10 MB
>  # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use 
> the configuration variable 
> {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}.
> If you agree, we should create a follow up jira. I wanted to backport this 
> down the chain of branches, but this upgrade scenario is stopping me.
> [~shv] commented this in  HDFS-13222 jira.
> https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-23 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449015#comment-16449015
 ] 

Plamen Jeliazkov commented on HDFS-13399:
-

I was able to remove most of the changes around 
{{NameNodeProxies.createNonHAProxyWithClientProtocol}} by going with my 
proposal.

I moved most of the unit tests into {{TestStateAlignmentContextWithHA}} and 
reduced {{TestStateAlignmentContext}} to checking that there is no client state 
change if there is no HA.

Everything else remains the same.

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch, HDFS-13399-HDFS-12943.002.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13326) RBF: Improve the interfaces to modify and view mount tables

2018-04-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449011#comment-16449011
 ] 

genericqa commented on HDFS-13326:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
47s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}224m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13326 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920327/HDFS-13326.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d2c0ab693e9e 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f411de6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Updated] (HDFS-13493) Reduce the HttpServer2 thread count on DataNodes

2018-04-23 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13493:
---
Attachment: HDFS-13272.000.patch

> Reduce the HttpServer2 thread count on DataNodes
> 
>
> Key: HDFS-13493
> URL: https://issues.apache.org/jira/browse/HDFS-13493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272.000.patch
>
>
> Given that HFTP was removed in Hadoop 3 and WebHDFS is handled via Netty, the 
> HttpServer2 instance within the DataNode is only used for very basic tasks 
> such as the web UI. Thus we can safely reduce the thread count used here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads

2018-04-23 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448993#comment-16448993
 ] 

Erik Krogen edited comment on HDFS-13272 at 4/23/18 11:00 PM:
--

Sounds good to me. I have patches put together for both branch-2 and trunk, but 
given that they do very different things (one is adding a new config, one is 
just reducing the value used), I have created a separate JIRA for the second: 
HDFS-13493. I have attached the branch-2 patch here. I marked the new config as 
{{@Deprecated}} given that it is removed in Hadoop 3.

Edit: Actually, I realized publicizing the new config and marking as deprecated 
is overkill considering that it is only used by MiniDFSCluster. I instead added 
it as an {{@InterfaceAudience.Private}} config within {{DatanodeHttpServer}} 
which is just used by {{MiniDFSCluster}}. This is the v001 patch.


was (Author: xkrogen):
Sounds good to me. I have patches put together for both branch-2 and trunk, but 
given that they do very different things (one is adding a new config, one is 
just reducing the value used), I have created a separate JIRA for the second: 
HDFS-13493. I have attached the branch-2 patch here. I marked the new config as 
{{@Deprecated}} given that it is removed in Hadoop 3.

Edit: Actually, I realized publicizing the new config and marking as deprecated 
is overkill considering that it is only used by MiniDFSCluster. I instead added 
it as an {{@InterfaceAudience.Private}} config within {{DatanodeHttpServer}} 
which is just used by {{MiniDFSCluster}}.

> DataNodeHttpServer to have configurable HttpServer2 threads
> ---
>
> Key: HDFS-13272
> URL: https://issues.apache.org/jira/browse/HDFS-13272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272-branch-2.000.patch, 
> HDFS-13272-branch-2.001.patch
>
>
> In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 
> threads. In addition to the possibility of this being too few threads, it is 
> much higher than necessary in resource constrained environments such as 
> MiniDFSCluster. To avoid compatibility issues, rather than using 
> {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new 
> configuration for the DataNode's thread pool size.
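
As a sketch of the idea only (the key name below is hypothetical and this is not the attached patch), {{DatanodeHttpServer}} could look up a DataNode-side setting instead of hard-coding 10 when it configures the internal {{HttpServer2}}; {{Configuration}} and {{HttpServer2}} are already available in that class:

{code}
// Hypothetical key; HDFS-7279 hard-codes the value 10 today.
static final String DFS_DATANODE_HTTP_INTERNAL_PROXY_THREADS_KEY =
    "dfs.datanode.http.internal-proxy.threads";
static final int DFS_DATANODE_HTTP_INTERNAL_PROXY_THREADS_DEFAULT = 10;

// Inside the DatanodeHttpServer constructor, where conf is in scope:
Configuration confForInfoServer = new Configuration(conf);
confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS,
    conf.getInt(DFS_DATANODE_HTTP_INTERNAL_PROXY_THREADS_KEY,
        DFS_DATANODE_HTTP_INTERNAL_PROXY_THREADS_DEFAULT));
{code}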



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb

2018-04-23 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449003#comment-16449003
 ] 

Bharat Viswanadham commented on HDFS-13356:
---

Thank you [~shv] for the review.

Will commit it shortly.

> Balancer:Set default value of minBlockSize to 10mb 
> ---
>
> Key: HDFS-13356
> URL: https://issues.apache.org/jira/browse/HDFS-13356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.7.5
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: balancer, upgrades
> Attachments: HDFS-13356.00.patch, HDFS-13356.01.patch, 
> HDFS-13356.02.patch
>
>
>  It seems we can run into a problem during a rolling upgrade with this.
> The Balancer is upgraded after NameNodes, so once NN is upgraded it will 
> expect {{minBlockSize}} parameter in {{getBlocks()}}. The Balancer cannot 
> send it yet, so NN will use the default, which you set to 0. So NN will start 
> unexpectedly sending small blocks to the Balancer. So we should
>  # either change the default in protobuf to 10 MB
>  # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use 
> the configuration variable 
> {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}.
> If you agree, we should create a follow up jira. I wanted to backport this 
> down the chain of branches, but this upgrade scenario is stopping me.
> [~shv] commented this in  HDFS-13222 jira.
> https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads

2018-04-23 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448993#comment-16448993
 ] 

Erik Krogen edited comment on HDFS-13272 at 4/23/18 11:03 PM:
--

Sounds good to me. I have patches put together for both branch-2 and trunk, but 
given that they do very different things (one is adding a new config, one is 
just reducing the value used), I have created a separate JIRA for the second: 
HDFS-13493. I have attached the branch-2 patch here. I marked the new config as 
{{@Deprecated}} given that it is removed in Hadoop 3. I also set this to its 
minimum within MiniDFSCluster.

Edit: Actually, I realized publicizing the new config and marking as deprecated 
is overkill considering that it is only used by MiniDFSCluster. I instead added 
it as an {{@InterfaceAudience.Private}} config within {{DatanodeHttpServer}} 
which is just used by {{MiniDFSCluster}}. This is the v001 patch.


was (Author: xkrogen):
Sounds good to me. I have patches put together for both branch-2 and trunk, but 
given that they do very different things (one is adding a new config, one is 
just reducing the value used), I have created a separate JIRA for the second: 
HDFS-13493. I have attached the branch-2 patch here. I marked the new config as 
{{@Deprecated}} given that it is removed in Hadoop 3.

Edit: Actually, I realized publicizing the new config and marking as deprecated 
is overkill considering that it is only used by MiniDFSCluster. I instead added 
it as an {{@InterfaceAudience.Private}} config within {{DatanodeHttpServer}} 
which is just used by {{MiniDFSCluster}}. This is the v001 patch.

> DataNodeHttpServer to have configurable HttpServer2 threads
> ---
>
> Key: HDFS-13272
> URL: https://issues.apache.org/jira/browse/HDFS-13272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272-branch-2.000.patch, 
> HDFS-13272-branch-2.001.patch
>
>
> In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 
> threads. In addition to the possibility of this being too few threads, it is 
> much higher than necessary in resource constrained environments such as 
> MiniDFSCluster. To avoid compatibility issues, rather than using 
> {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new 
> configuration for the DataNode's thread pool size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13493) Reduce the HttpServer2 thread count on DataNodes

2018-04-23 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449005#comment-16449005
 ] 

Erik Krogen edited comment on HDFS-13493 at 4/23/18 11:02 PM:
--

This requires setting the acceptor and selector thread counts as well as the 
max. Attaching v000 patch for the same.


was (Author: xkrogen):
This requires setting the acceptor and selector thread counts as well as the 
max.

> Reduce the HttpServer2 thread count on DataNodes
> 
>
> Key: HDFS-13493
> URL: https://issues.apache.org/jira/browse/HDFS-13493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272.000.patch
>
>
> Given that HFTP was removed in Hadoop 3 and WebHDFS is handled via Netty, the 
> HttpServer2 instance within the DataNode is only used for very basic tasks 
> such as the web UI. Thus we can safely reduce the thread count used here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13493) Reduce the HttpServer2 thread count on DataNodes

2018-04-23 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449005#comment-16449005
 ] 

Erik Krogen commented on HDFS-13493:


This requires setting the acceptor and selector thread counts as well as the 
max.
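
That is because Jetty runs its acceptor and selector threads out of the same pool as the request workers, so the pool maximum has to stay above acceptors + selectors to leave any workers at all. A tiny illustration of that budget, with purely assumed numbers rather than the values in the patch:

{code}
public class JettyThreadBudget {
  public static void main(String[] args) {
    int acceptors = 1;   // threads accepting new connections
    int selectors = 1;   // threads polling established connections
    int workers = 2;     // threads left to actually serve requests
    // Shrinking the pool maximum therefore requires shrinking the
    // acceptor and selector counts along with it.
    int maxThreads = acceptors + selectors + workers;
    System.out.println("pool size needed: " + maxThreads);
  }
}
{code}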

> Reduce the HttpServer2 thread count on DataNodes
> 
>
> Key: HDFS-13493
> URL: https://issues.apache.org/jira/browse/HDFS-13493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272.000.patch
>
>
> Given that HFTP was removed in Hadoop 3 and WebHDFS is handled via Netty, the 
> HttpServer2 instance within the DataNode is only used for very basic tasks 
> such as the web UI. Thus we can safely reduce the thread count used here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13493) Reduce the HttpServer2 thread count on DataNodes

2018-04-23 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13493:
---
Status: Patch Available  (was: Open)

> Reduce the HttpServer2 thread count on DataNodes
> 
>
> Key: HDFS-13493
> URL: https://issues.apache.org/jira/browse/HDFS-13493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272.000.patch
>
>
> Given that HFTP was removed in Hadoop 3 and WebHDFS is handled via Netty, the 
> HttpServer2 instance within the DataNode is only used for very basic tasks 
> such as the web UI. Thus we can safely reduce the thread count used here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13493) Reduce the HttpServer2 thread count on DataNodes

2018-04-23 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13493:
---
Target Version/s: 3.1.0, 3.0.3  (was: 3.0.3)

> Reduce the HttpServer2 thread count on DataNodes
> 
>
> Key: HDFS-13493
> URL: https://issues.apache.org/jira/browse/HDFS-13493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272.000.patch
>
>
> Given that HFTP was removed in Hadoop 3 and WebHDFS is handled via Netty, the 
> HttpServer2 instance within the DataNode is only used for very basic tasks 
> such as the web UI. Thus we can safely reduce the thread count used here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads

2018-04-23 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448993#comment-16448993
 ] 

Erik Krogen edited comment on HDFS-13272 at 4/23/18 10:59 PM:
--

Sounds good to me. I have patches put together for both branch-2 and trunk, but 
given that they do very different things (one is adding a new config, one is 
just reducing the value used), I have created a separate JIRA for the second: 
HDFS-13493. I have attached the branch-2 patch here. I marked the new config as 
{{@Deprecated}} given that it is removed in Hadoop 3.

Edit: Actually, I realized publicizing the new config and marking as deprecated 
is overkill considering that it is only used by MiniDFSCluster. I instead added 
it as an {{@InterfaceAudience.Private}} config within {{DatanodeHttpServer}} 
which is just used by {{MiniDFSCluster}}.


was (Author: xkrogen):
Sounds good to me. I have patches put together for both branch-2 and trunk, but 
given that they do very different things (one is adding a new config, one is 
just reducing the value used), I have created a separate JIRA for the second: 
HDFS-13493. I have attached the branch-2 patch here. I marked the new config as 
{{@Deprecated}} given that it is removed in Hadoop 3.

> DataNodeHttpServer to have configurable HttpServer2 threads
> ---
>
> Key: HDFS-13272
> URL: https://issues.apache.org/jira/browse/HDFS-13272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272-branch-2.000.patch, 
> HDFS-13272-branch-2.001.patch
>
>
> In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 
> threads. In addition to the possibility of this being too few threads, it is 
> much higher than necessary in resource constrained environments such as 
> MiniDFSCluster. To avoid compatibility issues, rather than using 
> {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new 
> configuration for the DataNode's thread pool size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads

2018-04-23 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13272:
---
Attachment: HDFS-13272-branch-2.001.patch

> DataNodeHttpServer to have configurable HttpServer2 threads
> ---
>
> Key: HDFS-13272
> URL: https://issues.apache.org/jira/browse/HDFS-13272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272-branch-2.000.patch, 
> HDFS-13272-branch-2.001.patch
>
>
> In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 
> threads. In addition to the possibility of this being too few threads, it is 
> much higher than necessary in resource constrained environments such as 
> MiniDFSCluster. To avoid compatibility issues, rather than using 
> {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new 
> configuration for the DataNode's thread pool size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13492) Limit httpfs binds to certain IP addresses in branch-2

2018-04-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448998#comment-16448998
 ] 

genericqa commented on HDFS-13492:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
11s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
50s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:f667ef1 |
| JIRA Issue | HDFS-13492 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920357/HDFS-13492.branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 97cf6f06e64a 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 99e82e2 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_171 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24048/testReport/ |
| Max. process+thread count | 316 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24048/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Limit httpfs binds to certain IP addresses in branch-2
> --
>
> Key: HDFS-13492
> URL: https://issues.apache.org/jira/browse/HDFS-13492
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-13492.branch-2.001.patch
>
>
> Currently httpfs binds to all IP addresses of the host by default. Some 
> operators want to limit httpfs to accept only local connections.
> We should provide that option, and it's pretty doable in Hadoop 2.x.
> Note 

[jira] [Updated] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads

2018-04-23 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13272:
---
Target Version/s: 2.10.0

> DataNodeHttpServer to have configurable HttpServer2 threads
> ---
>
> Key: HDFS-13272
> URL: https://issues.apache.org/jira/browse/HDFS-13272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272-branch-2.000.patch
>
>
> In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 
> threads. In addition to the possibility of this being too few threads, it is 
> much higher than necessary in resource constrained environments such as 
> MiniDFSCluster. To avoid compatibility issues, rather than using 
> {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new 
> configuration for the DataNode's thread pool size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads

2018-04-23 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13272:
---
Status: Patch Available  (was: Open)

> DataNodeHttpServer to have configurable HttpServer2 threads
> ---
>
> Key: HDFS-13272
> URL: https://issues.apache.org/jira/browse/HDFS-13272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272-branch-2.000.patch
>
>
> In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 
> threads. In addition to the possibility of this being too few threads, it is 
> much higher than necessary in resource constrained environments such as 
> MiniDFSCluster. To avoid compatibility issues, rather than using 
> {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new 
> configuration for the DataNode's thread pool size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads

2018-04-23 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13272:
---
Attachment: HDFS-13272-branch-2.000.patch

> DataNodeHttpServer to have configurable HttpServer2 threads
> ---
>
> Key: HDFS-13272
> URL: https://issues.apache.org/jira/browse/HDFS-13272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272-branch-2.000.patch
>
>
> In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 
> threads. In addition to the possibility of this being too few threads, it is 
> much higher than necessary in resource constrained environments such as 
> MiniDFSCluster. To avoid compatibility issues, rather than using 
> {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new 
> configuration for the DataNode's thread pool size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads

2018-04-23 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448993#comment-16448993
 ] 

Erik Krogen edited comment on HDFS-13272 at 4/23/18 10:50 PM:
--

Sounds good to me. I have patches put together for both branch-2 and trunk, but 
given that they do very different things (one is adding a new config, one is 
just reducing the value used), I have created a separate JIRA for the second: 
HDFS-13493. I have attached the branch-2 patch here. I marked the new config as 
{{@Deprecated}} given that it is removed in Hadoop 3.
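
To make the branch-2 approach concrete, here is a minimal sketch; the key name, constant names, and class below are hypothetical illustrations of the idea, not the actual HDFS-13272 patch:

{code:java}
import org.apache.hadoop.conf.Configuration;

/**
 * Hypothetical sketch: a DataNode-specific thread-pool key (deprecated,
 * since Hadoop 3 drops it again) with the old hard-coded 10 as the default,
 * instead of reusing HttpServer2#HTTP_MAX_THREADS directly.
 */
public class DataNodeHttpThreadsSketch {
  /** Hypothetical key name; the real patch may use a different one. */
  @Deprecated
  public static final String DN_HTTP_MAX_THREADS_KEY =
      "dfs.datanode.http.internal.max-threads";
  public static final int DN_HTTP_MAX_THREADS_DEFAULT = 10;

  /** Resolve the thread count the Jetty server on the DataNode should use. */
  public static int resolveThreadCount(Configuration conf) {
    return conf.getInt(DN_HTTP_MAX_THREADS_KEY, DN_HTTP_MAX_THREADS_DEFAULT);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // A resource-constrained caller such as MiniDFSCluster could lower this.
    conf.setInt(DN_HTTP_MAX_THREADS_KEY, 4);
    System.out.println("DataNode HTTP threads: " + resolveThreadCount(conf));
  }
}
{code}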


was (Author: xkrogen):
Sounds good to me. I have patches put together for both branch-2 and trunk, but 
given that they do very different things (one is adding a new config, one is 
just reducing the value used), I have created a separate JIRA for the second: 
HDFS-13493. I have attached the branch-2 patch here.

> DataNodeHttpServer to have configurable HttpServer2 threads
> ---
>
> Key: HDFS-13272
> URL: https://issues.apache.org/jira/browse/HDFS-13272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 
> threads. In addition to the possibility of this being too few threads, it is 
> much higher than necessary in resource constrained environments such as 
> MiniDFSCluster. To avoid compatibility issues, rather than using 
> {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new 
> configuration for the DataNode's thread pool size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads

2018-04-23 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448993#comment-16448993
 ] 

Erik Krogen commented on HDFS-13272:


Sounds good to me. I have patches put together for both branch-2 and trunk, but 
given that they do very different things (one is adding a new config, one is 
just reducing the value used), I have created a separate JIRA for the second: 
HDFS-13493. I have attached the branch-2 patch here.

> DataNodeHttpServer to have configurable HttpServer2 threads
> ---
>
> Key: HDFS-13272
> URL: https://issues.apache.org/jira/browse/HDFS-13272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 
> threads. In addition to the possibility of this being too few threads, it is 
> much higher than necessary in resource constrained environments such as 
> MiniDFSCluster. To avoid compatibility issues, rather than using 
> {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new 
> configuration for the DataNode's thread pool size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13493) Reduce the HttpServer2 thread count on DataNodes

2018-04-23 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-13493:
--

 Summary: Reduce the HttpServer2 thread count on DataNodes
 Key: HDFS-13493
 URL: https://issues.apache.org/jira/browse/HDFS-13493
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Erik Krogen
Assignee: Erik Krogen


Given that HFTP was removed in Hadoop 3 and WebHDFS is handled via Netty, the 
HttpServer2 instance within the DataNode is only used for very basic tasks such 
as the web UI. Thus we can safely reduce the thread count used here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13489) Get base snapshotable path if exists for a given path

2018-04-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448988#comment-16448988
 ] 

genericqa commented on HDFS-13489:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  3s{color} | {color:orange} hadoop-hdfs-project: The patch generated 52 new 
+ 402 unchanged - 0 fixed = 454 total (was 402) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
15s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}205m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13489 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920325/HDFS-13489.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux c998af603f31 3.13.0-139-generic 

[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448977#comment-16448977
 ] 

Íñigo Goiri commented on HDFS-13336:


Thanks, [~surmountian].
For the record, the daily Windows build is failing with the same error, as you can see 
[here|https://builds.apache.org/job/hadoop-trunk-win/443/testReport/org.apache.hadoop.hdfs.server.datanode.fsdataset.impl/TestWriteToReplica/].

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-23 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448970#comment-16448970
 ] 

Plamen Jeliazkov commented on HDFS-13399:
-

Yeah, I tried to remove the AlignmentContext from {{createNonHAProxy}} just now, 
but I had forgotten that the HAProxyFactory implementations eventually call 
{{createNonHAProxy}}, which is why I had to add it there in the first place.

I will modify {{NameNodeProxiesClient.createProxyWithClientProtocol}} and my 
unit tests accordingly.
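
For illustration only, a toy sketch of the non-static idea being discussed; the signatures below are simplified stand-ins and do not match the real NameNodeProxiesClient/HAProxyFactory APIs:

{code:java}
/**
 * Toy sketch (simplified, hypothetical signatures): the AlignmentContext is
 * threaded through proxy creation as a parameter instead of being stored in
 * a static field on Client, so each DFSClient carries its own context.
 */
public class AlignmentContextSketch {
  /** Stand-in for the real AlignmentContext interface. */
  interface AlignmentContext {
    long getLastSeenStateId();
  }

  static String createProxyWithClientProtocol(String nnUri,
      AlignmentContext ctx) {
    // An HA proxy factory would eventually call createNonHAProxy, so the
    // context has to be a parameter all the way down, not a static field.
    return createNonHAProxy(nnUri, ctx);
  }

  static String createNonHAProxy(String nnAddr, AlignmentContext ctx) {
    // The real code would hand ctx to the RPC layer for each Call.
    return nnAddr + " (stateId=" + ctx.getLastSeenStateId() + ")";
  }

  public static void main(String[] args) {
    AlignmentContext perClientCtx = () -> 42L;  // each client has its own
    System.out.println(createProxyWithClientProtocol("nn1:8020", perClientCtx));
  }
}
{code}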

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch, HDFS-13399-HDFS-12943.002.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13492) Limit httpfs binds to certain IP addresses in branch-2

2018-04-23 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13492:
---
Status: Patch Available  (was: Open)

> Limit httpfs binds to certain IP addresses in branch-2
> --
>
> Key: HDFS-13492
> URL: https://issues.apache.org/jira/browse/HDFS-13492
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-13492.branch-2.001.patch
>
>
> Currently httpfs binds to all IP addresses of the host by default. Some 
> operators want to limit httpfs to accept only local connections.
> We should provide that option, and it's pretty doable in Hadoop 2.x.
> Note that httpfs underlying implementation changed in Hadoop 3, and the Jetty 
> based httpfs implementation already supports that, I believe.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13492) Limit httpfs binds to certain IP addresses in branch-2

2018-04-23 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448964#comment-16448964
 ] 

Wei-Chiu Chuang commented on HDFS-13492:


Rev001:
 Updated ssl-server.xml and server.xml. The relevant connector parameter is 
documented at https://tomcat.apache.org/tomcat-6.0-doc/config/http.html, but we 
didn't add this parameter before.

For httpfs server, httpfs.http.hostname comes from environment variable 
HTTPFS_HTTP_HOSTNAME. After this parameter is passed in 
(HTTPFS_HTTP_HOSTNAME=127.0.0.1), only local connection is accepted.

Tested successfully on a CDH5.13.1 cluster.

No unit test attached, because it is a Tomcat configuration change and a unit 
test wouldn't help much.

> Limit httpfs binds to certain IP addresses in branch-2
> --
>
> Key: HDFS-13492
> URL: https://issues.apache.org/jira/browse/HDFS-13492
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-13492.branch-2.001.patch
>
>
> Currently httpfs binds to all IP addresses of the host by default. Some 
> operators want to limit httpfs to accept only local connections.
> We should provide that option, and it's pretty doable in Hadoop 2.x.
> Note that httpfs underlying implementation changed in Hadoop 3, and the Jetty 
> based httpfs implementation already supports that, I believe.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448957#comment-16448957
 ] 

Xiao Liang commented on HDFS-13336:
---

The test result on Windows without [^HDFS-13336.003.patch] is:

{color:#d04437}[INFO] Results:{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Errors:{color}
{color:#d04437}[ERROR] TestWriteToReplica.testAppend:88 » IO Could not fully 
delete D:\Git\MT\OSSHado...{color}
{color:#d04437}[ERROR] TestWriteToReplica.testClose:66 » IO Could not fully 
delete D:\Git\MT\OSSHadoo...{color}
{color:#d04437}[ERROR] 
TestWriteToReplica.testReplicaMapAfterDatanodeRestart:512 » IO Could not 
fully...{color}
{color:#d04437}[ERROR] TestWriteToReplica.testWriteToRbw:108 » IO Could not 
fully delete D:\Git\MT\OS...{color}
{color:#d04437}[ERROR] TestWriteToReplica.testWriteToTemporary:128 » IO Could 
not fully delete D:\Git...{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Tests run: 6, Failures: 0, Errors: 5, Skipped: 0{color}

And with the patch it is:

{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] T E S T S{color}
{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] Running 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica{color}
{color:#14892c}[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 15.151 s - in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Results:{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0{color}

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> 

[jira] [Updated] (HDFS-13492) Limit httpfs binds to certain IP addresses in branch-2

2018-04-23 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13492:
---
Attachment: HDFS-13492.branch-2.001.patch

> Limit httpfs binds to certain IP addresses in branch-2
> --
>
> Key: HDFS-13492
> URL: https://issues.apache.org/jira/browse/HDFS-13492
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-13492.branch-2.001.patch
>
>
> Currently httpfs binds to all IP addresses of the host by default. Some 
> operators want to limit httpfs to accept only local connections.
> We should provide that option, and it's pretty doable in Hadoop 2.x.
> Note that httpfs underlying implementation changed in Hadoop 3, and the Jetty 
> based httpfs implementation already supports that, I believe.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13492) Limit httpfs binds to certain IP addresses in branch-2

2018-04-23 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-13492:
--

 Summary: Limit httpfs binds to certain IP addresses in branch-2
 Key: HDFS-13492
 URL: https://issues.apache.org/jira/browse/HDFS-13492
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: httpfs
Affects Versions: 2.6.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


Currently httpfs binds to all IP addresses of the host by default. Some 
operators want to limit httpfs to accept only local connections.

We should provide that option, and it's pretty doable in Hadoop 2.x.

Note that httpfs underlying implementation changed in Hadoop 3, and the Jetty 
based httpfs implementation already supports that, I believe.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-23 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448937#comment-16448937
 ] 

Konstantin Shvachko commented on HDFS-13399:


Your proposal to change 
{{NameNodeProxiesClient.createProxyWithClientProtocol()}} makes sense to me.
I see how you ended up adding the parameter to {{createNonHAProxy()}} now.

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch, HDFS-13399-HDFS-12943.002.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb

2018-04-23 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448932#comment-16448932
 ] 

Konstantin Shvachko commented on HDFS-13356:


+1 looks good.

> Balancer:Set default value of minBlockSize to 10mb 
> ---
>
> Key: HDFS-13356
> URL: https://issues.apache.org/jira/browse/HDFS-13356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.7.5
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: balancer, upgrades
> Attachments: HDFS-13356.00.patch, HDFS-13356.01.patch, 
> HDFS-13356.02.patch
>
>
>  It seems we can run into a problem while a rolling upgrade with this.
> The Balancer is upgraded after NameNodes, so once NN is upgraded it will 
> expect {{minBlockSize}} parameter in {{getBlocks()}}. The Balancer cannot 
> send it yet, so NN will use the default, which you set to 0. So NN will start 
> unexpectedly sending small blocks to the Balancer. So we should
>  # either change the default in protobuf to 10 MB
>  # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use 
> the configuration variable 
> {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}.
> If you agree, we should create a follow up jira. I wanted to backport this 
> down the chain of branches, but this upgrade scenario is stopping me.
> [~shv] commented this in  HDFS-13222 jira.
> https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855
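
A short sketch of option 2 above; only the DFSConfigKeys constant name is taken from the description, while the key string, default, and helper method are illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;

/**
 * Sketch of option 2: treat minBlockSize == 0 from an old (not yet upgraded)
 * Balancer as "not set" and fall back to the configured value, so the NN
 * never starts handing out tiny blocks during a rolling upgrade.
 */
public class GetBlocksMinSizeSketch {
  // Illustrative literal; the real key sits behind
  // DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY.
  static final String MIN_BLOCK_SIZE_KEY =
      "dfs.balancer.getBlocks.min-block-size";
  static final long MIN_BLOCK_SIZE_DEFAULT = 10L * 1024 * 1024;  // 10 MB

  static long effectiveMinBlockSize(long requested, Configuration conf) {
    // 0 means the caller predates the new getBlocks() parameter.
    return requested > 0
        ? requested
        : conf.getLong(MIN_BLOCK_SIZE_KEY, MIN_BLOCK_SIZE_DEFAULT);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    System.out.println(effectiveMinBlockSize(0L, conf));         // falls back
    System.out.println(effectiveMinBlockSize(20L << 20, conf));  // honored
  }
}
{code}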



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb

2018-04-23 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-13356:
---
Description: 
 It seems we can run into a problem while a rolling upgrade with this.
The Balancer is upgraded after NameNodes, so once NN is upgraded it will expect 
{{minBlockSize}} parameter in {{getBlocks()}}. The Balancer cannot send it yet, 
so NN will use the default, which you set to 0. So NN will start unexpectedly 
sending small blocks to the Balancer. So we should
 # either change the default in protobuf to 10 MB
 # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use 
the configuration variable 
{{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}.

If you agree, we should create a follow up jira. I wanted to backport this down 
the chain of branches, but this upgrade scenario is stopping me.

[~shv] commented this in  HDFS-13222 jira.

https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855

  was:
 It seems we can run into a problem while a rolling upgrade with this.
The Balancer is upgraded after NameNodes, so once NN is upgraded it will expect 
{{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send it yet, 
so NN will use the default, which you set to 0. So NN will start unexpectedly 
sending small blocks to the Balancer. So we should
 # either change the default in protobuf to 10 MB
 # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use 
the configuration variable 
{{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}.

If you agree, we should create a follow up jira. I wanted to backport this down 
the chain of branches, but this upgrade scenario is stopping me.

[~shv]] commented this in  HDFS-13222 jira.

https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855


> Balancer:Set default value of minBlockSize to 10mb 
> ---
>
> Key: HDFS-13356
> URL: https://issues.apache.org/jira/browse/HDFS-13356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.7.5
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: balancer, upgrades
> Attachments: HDFS-13356.00.patch, HDFS-13356.01.patch, 
> HDFS-13356.02.patch
>
>
>  It seems we can run into a problem while a rolling upgrade with this.
> The Balancer is upgraded after NameNodes, so once NN is upgraded it will 
> expect {{minBlockSize}} parameter in {{getBlocks()}}. The Balancer cannot 
> send it yet, so NN will use the default, which you set to 0. So NN will start 
> unexpectedly sending small blocks to the Balancer. So we should
>  # either change the default in protobuf to 10 MB
>  # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use 
> the configuration variable 
> {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}.
> If you agree, we should create a follow up jira. I wanted to backport this 
> down the chain of branches, but this upgrade scenario is stopping me.
> [~shv] commented this in  HDFS-13222 jira.
> https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13356) Balancer:Set default value of minBlockSize to 10mb

2018-04-23 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-13356:
---
Description: 
 It seems we can run into a problem while a rolling upgrade with this.
The Balancer is upgraded after NameNodes, so once NN is upgraded it will expect 
{{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send it yet, 
so NN will use the default, which you set to 0. So NN will start unexpectedly 
sending small blocks to the Balancer. So we should
 # either change the default in protobuf to 10 MB
 # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use 
the configuration variable 
{{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}.

If you agree, we should create a follow up jira. I wanted to backport this down 
the chain of branches, but this upgrade scenario is stopping me.

[~shv]] commented this in  HDFS-13222 jira.

https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855

  was:
 It seems we can run into a problem while a rolling upgrade with this.
The Balancer is upgraded after NameNodes, so once NN is upgraded it will expect 
{{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send it yet, 
so NN will use the default, which you set to 0. So NN will start unexpectedly 
sending small blocks to the Balancer. So we should
 # either change the default in protobuf to 10 MB
 # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use 
the configuration variable 
{{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}.

If you agree, we should create a follow up jira. I wanted to backport this down 
the chain of branches, but this upgrade scenario is stopping me.

[~barnaul] commented this in  HDFS-13222 jira.

https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855


> Balancer:Set default value of minBlockSize to 10mb 
> ---
>
> Key: HDFS-13356
> URL: https://issues.apache.org/jira/browse/HDFS-13356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.7.5
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: balancer, upgrades
> Attachments: HDFS-13356.00.patch, HDFS-13356.01.patch, 
> HDFS-13356.02.patch
>
>
>  It seems we can run into a problem while a rolling upgrade with this.
> The Balancer is upgraded after NameNodes, so once NN is upgraded it will 
> expect {{minBlockSize}}parameter in {{getBlocks()}}. The Balancer cannot send 
> it yet, so NN will use the default, which you set to 0. So NN will start 
> unexpectedly sending small blocks to the Balancer. So we should
>  # either change the default in protobuf to 10 MB
>  # or treat {{minBlockSize == 0}} in {{NameNodeRpcServer}} as a signal to use 
> the configuration variable 
> {{DFSConfigKeys.DFS_BALANCER_GETBLOCKS_MIN_BLOCK_SIZE_KEY}}.
> If you agree, we should create a follow up jira. I wanted to backport this 
> down the chain of branches, but this upgrade scenario is stopping me.
> [~shv]] commented this in  HDFS-13222 jira.
> https://issues.apache.org/jira/browse/HDFS-13222?focusedCommentId=16414855=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16414855



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448899#comment-16448899
 ] 

Xiao Liang commented on HDFS-13336:
---

Thanks [~elgoiri] and [~chris.douglas] for the fix of 
https://issues.apache.org/jira/browse/HDFS-13408 , I have updated the patch 
[^HDFS-13336.003.patch] basing on it, which should fix the test failures for 
windodws.
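
For reference, a minimal sketch of the HDFS-13408-style approach this patch builds on; the helper and builder overload names are assumed and may differ from the actual code:

{code:java}
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.test.GenericTestUtils;

/**
 * Sketch of the idea only: build each MiniDFSCluster under a per-test
 * randomized directory so a leftover, still-locked directory on Windows
 * cannot make the next test fail with "Could not fully delete".
 */
public class RandomizedBaseDirSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    File baseDir = GenericTestUtils.getRandomizedTestDir(); // assumed helper
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf, baseDir)           // assumed overload
            .numDataNodes(1)
            .build();
    try {
      cluster.waitActive();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}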

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448899#comment-16448899
 ] 

Xiao Liang edited comment on HDFS-13336 at 4/23/18 9:38 PM:


Thanks [~elgoiri] and [~chris.douglas] for the fix of 
https://issues.apache.org/jira/browse/HDFS-13408 , I have updated the patch 
[^HDFS-13336.003.patch] basing on it, which should fix the test failures for 
windows.


was (Author: surmountian):
Thanks [~elgoiri] and [~chris.douglas] for the fix of 
https://issues.apache.org/jira/browse/HDFS-13408 , I have updated the patch 
[^HDFS-13336.003.patch] basing on it, which should fix the test failures for 
windodws.

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-04-23 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448894#comment-16448894
 ] 

BELUGA BEHR commented on HDFS-13448:


Team,

Please consider my patch for introduction into the project (as-is, without the 
configuration).  I'm not a fan of having yet another configuration that almost 
no one will touch.  If someone feels strongly about it, it can be added later.  
Thanks.

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch
>
>
> According to the HDFS Block Place Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when the {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  Where this comes into play is where you have, for example, a flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica and this 
> leads to un-even block placements, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example, if the DataNode is removed from the host where the 
> Flume agent is running, or this {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only so far as now the first block replica will always 
> be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.
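
A minimal sketch of how a client passes the existing {{NO_LOCAL_WRITE}} hint today; the new flag proposed in this issue (name not final) would simply be added to the same EnumSet so that the first replica also skips the local rack. The path and numeric arguments are example values only:

{code:java}
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

/** Sketch: create a file while asking HDFS not to place replica 1 locally. */
public class NoLocalWriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // The proposed "ignore locality" flag would be added to this set.
    EnumSet<CreateFlag> flags = EnumSet.of(
        CreateFlag.CREATE, CreateFlag.OVERWRITE, CreateFlag.NO_LOCAL_WRITE);
    try (FSDataOutputStream out = fs.create(
        new Path("/tmp/flume/events.log"),       // example path
        FsPermission.getFileDefault(), flags,
        4096, (short) 3, 128 * 1024 * 1024L, null)) {
      out.writeBytes("hello\n");
    }
  }
}
{code}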



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448878#comment-16448878
 ] 

Íñigo Goiri commented on HDFS-13336:


In addition to the Yetus run, [~surmountian] can you post the report for the 
unit tests passing on Windows?

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. 
> !https://builds.apache.org/static/fc5100d0/images/16x16/document_delete.png!  
> Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13336) Test cases of TestWriteToReplica failed in windows

2018-04-23 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13336:
--
Attachment: HDFS-13336.003.patch

> Test cases of TestWriteToReplica failed in windows
> --
>
> Key: HDFS-13336
> URL: https://issues.apache.org/jira/browse/HDFS-13336
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13336.000.patch, HDFS-13336.001.patch, 
> HDFS-13336.002.patch, HDFS-13336.003.patch
>
>
> Test cases of TestWriteToReplica failed in windows with errors like:
> h4. Error Details
> Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
> h4. Stack Trace
> java.io.IOException: Could not fully delete 
> F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\1\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1011)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:932)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:864)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:497) at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:456) 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica.testAppend(TestWriteToReplica.java:89)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13283) Percentage based Reserved Space Calculation for DataNode

2018-04-23 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448846#comment-16448846
 ] 

Lukas Majercak commented on HDFS-13283:
---

Added patch007, hope this one works.

> Percentage based Reserved Space Calculation for DataNode
> 
>
> Key: HDFS-13283
> URL: https://issues.apache.org/jira/browse/HDFS-13283
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13283.000.patch, HDFS-13283.001.patch, 
> HDFS-13283.002.patch, HDFS-13283.003.patch, HDFS-13283.004.patch, 
> HDFS-13283.005.patch, HDFS-13283.006.patch, HDFS-13283.007.patch
>
>
> Currently, the only way to configure reserved disk space for non-HDFS data on 
> a DataNode is a constant value via {{dfs.datanode.du.reserved}}. This can be 
> an issue in heterogeneous clusters where the size of DNs can differ. The 
> proposed solution is to allow percentage based configuration (and their 
> combination):
>  # ABSOLUTE
>  ** based on absolute number of reserved space
>  # PERCENTAGE
>  ** based on percentage of total capacity in the storage
>  # CONSERVATIVE
>  ** calculates both of the above and takes the one that will yield more 
> reserved space
>  # AGGRESSIVE
>  ** calculates both of the above and takes the one that will yield less reserved space
>  
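As a rough illustration of the four modes listed above (a sketch only; the method names are illustrative and not taken from the patch):

{code:java}
// Sketch of the four reserved-space calculation modes described in the summary.
// "capacity" is the total capacity of the storage volume in bytes.
long absolute(long reservedBytes) {
  return reservedBytes;                           // ABSOLUTE: fixed byte value
}
long percentage(long capacity, double reservedPct) {
  return (long) (capacity * reservedPct / 100.0); // PERCENTAGE: share of capacity
}
long conservative(long capacity, long reservedBytes, double reservedPct) {
  // takes whichever yields MORE reserved space
  return Math.max(absolute(reservedBytes), percentage(capacity, reservedPct));
}
long aggressive(long capacity, long reservedBytes, double reservedPct) {
  // takes whichever yields LESS reserved space
  return Math.min(absolute(reservedBytes), percentage(capacity, reservedPct));
}
{code}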



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13283) Percentage based Reserved Space Calculation for DataNode

2018-04-23 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13283:
--
Attachment: HDFS-13283.007.patch

> Percentage based Reserved Space Calculation for DataNode
> 
>
> Key: HDFS-13283
> URL: https://issues.apache.org/jira/browse/HDFS-13283
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13283.000.patch, HDFS-13283.001.patch, 
> HDFS-13283.002.patch, HDFS-13283.003.patch, HDFS-13283.004.patch, 
> HDFS-13283.005.patch, HDFS-13283.006.patch, HDFS-13283.007.patch
>
>
> Currently, the only way to configure reserved disk space for non-HDFS data on 
> a DataNode is a constant value via {{dfs.datanode.du.reserved}}. This can be 
> an issue in heterogeneous clusters where the size of DNs can differ. The 
> proposed solution is to allow percentage based configuration (and their 
> combination):
>  # ABSOLUTE
>  ** based on absolute number of reserved space
>  # PERCENTAGE
>  ** based on percentage of total capacity in the storage
>  # CONSERVATIVE
>  ** calculates both of the above and takes the one that will yield more 
> reserved space
>  # AGGRESSIVE
>  ** calculates both of the above and takes the one that will yield less reserved space
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13468) Add erasure coding metrics into ReadStatistics

2018-04-23 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448826#comment-16448826
 ] 

Lei (Eddy) Xu commented on HDFS-13468:
--

The failure is not relevant. [~xiaochen], could you take a look?

> Add erasure coding metrics into ReadStatistics
> --
>
> Key: HDFS-13468
> URL: https://issues.apache.org/jira/browse/HDFS-13468
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Attachments: HDFS-13468.00.patch, HDFS-13468.01.patch
>
>
> Expose Erasure Coding related metrics for InputStream in ReadStatistics. 
>  
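For context, per-stream read statistics are already reachable from the client; a minimal sketch of where the new counters would surface (the EC-specific getter below is an assumption about what this change adds, not an existing API):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;

public class ReadStatsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FSDataInputStream in = FileSystem.get(conf).open(new Path("/ec/file"))) {
      in.read(new byte[4096]);
      HdfsDataInputStream hin = (HdfsDataInputStream) in;
      // Existing per-stream counter:
      System.out.println("total bytes read: "
          + hin.getReadStatistics().getTotalBytesRead());
      // Hypothetical EC counter this JIRA would expose (name is an assumption):
      // hin.getReadStatistics().getTotalEcDecodingTimeMillis();
    }
  }
}
{code}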



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13326) RBF: Improve the interfaces to modify and view mount tables

2018-04-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448810#comment-16448810
 ] 

Íñigo Goiri commented on HDFS-13326:


[^HDFS-13326.002.patch] LGTM.
This can go all the way to 2.9 as it does not break compatibility.

> RBF: Improve the interfaces to modify and view mount tables
> ---
>
> Key: HDFS-13326
> URL: https://issues.apache.org/jira/browse/HDFS-13326
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
> Attachments: HDFS-13326.000.patch, HDFS-13326.001.patch, 
> HDFS-13326.002.patch
>
>
> In the DFSRouterAdmin cmd, the update logic is currently implemented inside the 
> add operation, which has some limitations (e.g. it cannot update "readonly" or 
> remove a destination). Given that the RPC already separates the add and update 
> operations, it would be better to do the same at the cmd level.
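For illustration, the existing add-based flow and the proposed explicit update could look like the following; the {{-update}} syntax is an assumption, since defining it is exactly what this JIRA covers:

{noformat}
# Existing: create (or implicitly overwrite) a mount point
hdfs dfsrouteradmin -add /data ns1 /data

# Proposed: update an existing mount point explicitly instead of re-running -add
# (subcommand and flags below are illustrative only)
hdfs dfsrouteradmin -update /data ns1 /data -readonly
{noformat}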



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13468) Add erasure coding metrics into ReadStatistics

2018-04-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448798#comment-16448798
 ] 

genericqa commented on HDFS-13468:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13468 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920107/HDFS-13468.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c3111e6547e6 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 
12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c533c77 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDFS-13408) MiniDFSCluster to support being built on randomized base directory

2018-04-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448781#comment-16448781
 ] 

Íñigo Goiri commented on HDFS-13408:


Thanks [~chris.douglas] for the review and the commit!
Let's start cutting down the failed unit tests on Windows.

> MiniDFSCluster to support being built on randomized base directory
> --
>
> Key: HDFS-13408
> URL: https://issues.apache.org/jira/browse/HDFS-13408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13408.000.patch, HDFS-13408.001.patch, 
> HDFS-13408.002.patch, HDFS-13408.003.patch, HDFS-13408.004.patch
>
>
> Generated files of MiniDFSCluster during tests are not properly cleaned up on 
> Windows, which fails all subsequent test cases using the same default 
> directory (Windows does not allow other processes to delete them). By 
> migrating to randomized base directories, test-path conflicts between test 
> cases will be avoided, even if they are running at the same time.
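A minimal sketch of the idea using the existing {{hdfs.minidfs.basedir}} override; the builder API added by this patch may differ in shape:

{code:java}
import java.io.File;
import java.nio.file.Files;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class RandomBaseDirExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // A unique base directory per run avoids colliding with leftover files
    // that Windows refuses to let another process delete.
    File baseDir = Files.createTempDirectory("minidfs-").toFile();
    conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir.getAbsolutePath());
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      cluster.getFileSystem().mkdirs(new Path("/t"));
    } finally {
      cluster.shutdown();
    }
  }
}
{code}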



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13326) RBF: Improve the interfaces to modify and view mount tables

2018-04-23 Thread Gang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448740#comment-16448740
 ] 

Gang Li edited comment on HDFS-13326 at 4/23/18 7:29 PM:
-

Hi guys, I just uploaded patch [^HDFS-13326.002.patch], which does not break 
compatibility. When the new JIRA opens, I will upload the one that removes the 
update functionality from the add cmd.


was (Author: gangli2384):
Hi guys, I just uploaded two patches. [^HDFS-13326.002.patch] does not break 
compatibility. [^HDFS-13326.003.patch] removed the update functionality 
from the add cmd.

> RBF: Improve the interfaces to modify and view mount tables
> ---
>
> Key: HDFS-13326
> URL: https://issues.apache.org/jira/browse/HDFS-13326
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
> Attachments: HDFS-13326.000.patch, HDFS-13326.001.patch, 
> HDFS-13326.002.patch
>
>
> In the DFSRouterAdmin cmd, the update logic is currently implemented inside the 
> add operation, which has some limitations (e.g. it cannot update "readonly" or 
> remove a destination). Given that the RPC already separates the add and update 
> operations, it would be better to do the same at the cmd level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13326) RBF: Improve the interfaces to modify and view mount tables

2018-04-23 Thread Gang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gang Li updated HDFS-13326:
---
Attachment: (was: HDFS-13326.003.patch)

> RBF: Improve the interfaces to modify and view mount tables
> ---
>
> Key: HDFS-13326
> URL: https://issues.apache.org/jira/browse/HDFS-13326
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
> Attachments: HDFS-13326.000.patch, HDFS-13326.001.patch, 
> HDFS-13326.002.patch
>
>
> In the DFSRouterAdmin cmd, the update logic is currently implemented inside the 
> add operation, which has some limitations (e.g. it cannot update "readonly" or 
> remove a destination). Given that the RPC already separates the add and update 
> operations, it would be better to do the same at the cmd level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13484) RBF: Disable Nameservices from the federation

2018-04-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448741#comment-16448741
 ] 

genericqa commented on HDFS-13484:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
36s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13484 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920317/HDFS-13484.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6e54915c9e52 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f411de6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24042/testReport/ |
| Max. process+thread count | 1334 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24042/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Disable Nameservices from the federation
> -
>
> Key: HDFS-13484
> URL: 

[jira] [Commented] (HDFS-13326) RBF: Improve the interfaces to modify and view mount tables

2018-04-23 Thread Gang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448740#comment-16448740
 ] 

Gang Li commented on HDFS-13326:


Hi guys, I just uploaded two patches. [^HDFS-13326.002.patch] does not break 
compatibility. [^HDFS-13326.003.patch] removed the update functionality 
from the add cmd.

> RBF: Improve the interfaces to modify and view mount tables
> ---
>
> Key: HDFS-13326
> URL: https://issues.apache.org/jira/browse/HDFS-13326
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
> Attachments: HDFS-13326.000.patch, HDFS-13326.001.patch, 
> HDFS-13326.002.patch, HDFS-13326.003.patch
>
>
> In the DFSRouterAdmin cmd, the update logic is currently implemented inside the 
> add operation, which has some limitations (e.g. it cannot update "readonly" or 
> remove a destination). Given that the RPC already separates the add and update 
> operations, it would be better to do the same at the cmd level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13326) RBF: Improve the interfaces to modify and view mount tables

2018-04-23 Thread Gang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gang Li updated HDFS-13326:
---
Attachment: HDFS-13326.003.patch

> RBF: Improve the interfaces to modify and view mount tables
> ---
>
> Key: HDFS-13326
> URL: https://issues.apache.org/jira/browse/HDFS-13326
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
> Attachments: HDFS-13326.000.patch, HDFS-13326.001.patch, 
> HDFS-13326.002.patch, HDFS-13326.003.patch
>
>
> In the DFSRouterAdmin cmd, the update logic is currently implemented inside the 
> add operation, which has some limitations (e.g. it cannot update "readonly" or 
> remove a destination). Given that the RPC already separates the add and update 
> operations, it would be better to do the same at the cmd level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13326) RBF: Improve the interfaces to modify and view mount tables

2018-04-23 Thread Gang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gang Li updated HDFS-13326:
---
Attachment: HDFS-13326.002.patch

> RBF: Improve the interfaces to modify and view mount tables
> ---
>
> Key: HDFS-13326
> URL: https://issues.apache.org/jira/browse/HDFS-13326
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
> Attachments: HDFS-13326.000.patch, HDFS-13326.001.patch, 
> HDFS-13326.002.patch
>
>
> In the DFSRouterAdmin cmd, the update logic is currently implemented inside the 
> add operation, which has some limitations (e.g. it cannot update "readonly" or 
> remove a destination). Given that the RPC already separates the add and update 
> operations, it would be better to do the same at the cmd level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13489) Get base snapshotable path if exists for a given path

2018-04-23 Thread Harkrishn Patro (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448729#comment-16448729
 ] 

Harkrishn Patro commented on HDFS-13489:


Thanks [~shashikant] for reviewing and commenting.

Handling of files was missing. Patch [^HDFS-13489.002.patch] takes care of 
handling both directories and files.

> Get base snapshotable path if exists for a given path
> -
>
> Key: HDFS-13489
> URL: https://issues.apache.org/jira/browse/HDFS-13489
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Reporter: Harkrishn Patro
>Assignee: Harkrishn Patro
>Priority: Major
> Attachments: HDFS-13489.001.patch, HDFS-13489.002.patch
>
>
> Currently, hdfs only lists the snapshotable paths in the filesystem. This 
> feature would add the functionality of figuring out if a given path is 
> snapshotable or not. If yes, it would return the base snapshotable path.
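The lookup can already be approximated on the client side; a rough sketch of the idea (not the proposed API, whose shape this JIRA will define):

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;

// Returns the snapshottable ancestor of 'path', or null if there is none.
static Path baseSnapshottablePath(DistributedFileSystem dfs, Path path)
    throws java.io.IOException {
  SnapshottableDirectoryStatus[] dirs = dfs.getSnapshottableDirListing();
  if (dirs == null) {
    return null;
  }
  String p = Path.getPathWithoutSchemeAndAuthority(path).toString();
  for (SnapshottableDirectoryStatus s : dirs) {
    String root = s.getFullPath().toString();
    if (p.equals(root) || p.startsWith(root.endsWith("/") ? root : root + "/")) {
      return s.getFullPath();
    }
  }
  return null;
}
{code}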



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13489) Get base snapshotable path if exists for a given path

2018-04-23 Thread Harkrishn Patro (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harkrishn Patro updated HDFS-13489:
---
Attachment: HDFS-13489.002.patch

> Get base snapshotable path if exists for a given path
> -
>
> Key: HDFS-13489
> URL: https://issues.apache.org/jira/browse/HDFS-13489
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Reporter: Harkrishn Patro
>Assignee: Harkrishn Patro
>Priority: Major
> Attachments: HDFS-13489.001.patch, HDFS-13489.002.patch
>
>
> Currently, hdfs only lists the snapshotable paths in the filesystem. This 
> feature would add the functionality of figuring out if a given path is 
> snapshotable or not. If yes, it would return the base snapshotable path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13283) Percentage based Reserved Space Calculation for DataNode

2018-04-23 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448701#comment-16448701
 ] 

Chris Douglas commented on HDFS-13283:
--

[~lukmajercak], could you regenerate the patch? +1 on the changes

> Percentage based Reserved Space Calculation for DataNode
> 
>
> Key: HDFS-13283
> URL: https://issues.apache.org/jira/browse/HDFS-13283
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13283.000.patch, HDFS-13283.001.patch, 
> HDFS-13283.002.patch, HDFS-13283.003.patch, HDFS-13283.004.patch, 
> HDFS-13283.005.patch, HDFS-13283.006.patch
>
>
> Currently, the only way to configure reserved disk space for non-HDFS data on 
> a DataNode is a constant value via {{dfs.datanode.du.reserved}}. This can be 
> an issue in heterogeneous clusters where the size of DNs can differ. The 
> proposed solution is to allow percentage based configuration (and their 
> combination):
>  # ABSOLUTE
>  ** based on absolute number of reserved space
>  # PERCENTAGE
>  ** based on percentage of total capacity in the storage
>  # CONSERVATIVE
>  ** calculates both of the above and takes the one that will yield more 
> reserved space
>  # AGGRESSIVE
>  ** calculates both of the above and takes the one that will yield less reserved space
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13408) MiniDFSCluster to support being built on randomized base directory

2018-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448680#comment-16448680
 ] 

Hudson commented on HDFS-13408:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14048 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14048/])
HDFS-13408. MiniDFSCluster to support being built on randomized base (cdouglas: 
rev f411de6a79a0a87f03c09366cfe7a7d0726ed932)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMiniDFSCluster.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


> MiniDFSCluster to support being built on randomized base directory
> --
>
> Key: HDFS-13408
> URL: https://issues.apache.org/jira/browse/HDFS-13408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13408.000.patch, HDFS-13408.001.patch, 
> HDFS-13408.002.patch, HDFS-13408.003.patch, HDFS-13408.004.patch
>
>
> Generated files of MiniDFSCluster during tests are not properly cleaned up on 
> Windows, which fails all subsequent test cases using the same default 
> directory (Windows does not allow other processes to delete them). By 
> migrating to randomized base directories, test-path conflicts between test 
> cases will be avoided, even if they are running at the same time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13408) MiniDFSCluster to support being built on randomized base directory

2018-04-23 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-13408:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.3
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

I committed this. Thanks [~surmountian]

> MiniDFSCluster to support being built on randomized base directory
> --
>
> Key: HDFS-13408
> URL: https://issues.apache.org/jira/browse/HDFS-13408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13408.000.patch, HDFS-13408.001.patch, 
> HDFS-13408.002.patch, HDFS-13408.003.patch, HDFS-13408.004.patch
>
>
> Generated files of MiniDFSCluster during tests are not properly cleaned up on 
> Windows, which fails all subsequent test cases using the same default 
> directory (Windows does not allow other processes to delete them). By 
> migrating to randomized base directories, test-path conflicts between test 
> cases will be avoided, even if they are running at the same time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13484) RBF: Disable Nameservices from the federation

2018-04-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13484:
---
Attachment: HDFS-13484.007.patch

> RBF: Disable Nameservices from the federation
> -
>
> Key: HDFS-13484
> URL: https://issues.apache.org/jira/browse/HDFS-13484
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13484.000.patch, HDFS-13484.001.patch, 
> HDFS-13484.002.patch, HDFS-13484.003.patch, HDFS-13484.004.patch, 
> HDFS-13484.005.patch, HDFS-13484.006.patch, HDFS-13484.007.patch
>
>
> HDFS-13478 introduced the Decommission store. We should disable access to 
> decommissioned subclusters.
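Operationally this would presumably surface as a router admin command along these lines (the exact subcommand and flags are an assumption until the patch settles):

{noformat}
# Illustrative only
hdfs dfsrouteradmin -nameservice disable ns1
hdfs dfsrouteradmin -nameservice enable ns1
{noformat}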



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-23 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448619#comment-16448619
 ] 

Plamen Jeliazkov commented on HDFS-13399:
-

Thanks for the review, [~shv].

Regarding (1), those changes are pretty easy and I generally agree. I think 
this is the sort of approach I was leaning towards when I asked about 
configuration: minimizing behavior changes to only where necessary.

Regarding (2), assuming we implement (1), then for most instantiations of 
AbstractNNFailoverProxyProvider we will simply pass null for 
an {{AlignmentContext}}. Only if DFSClient's initialization ends up calling 
{{NameNodeProxiesClient.createHAProxy}} will we set the {{AlignmentContext}}. 
Is this acceptable? Otherwise it will be difficult to pass an 
{{AlignmentContext}} into {{Client}} itself.

We can change NameNodeProxiesClient.createProxyWithClientProtocol like so:
{code:java}
if (failoverProxyProvider == null) {
  ...normal case...
} else {
  failoverProxyProvider.setAlignmentContext(alignmentContext);
  return createHAProxy(conf, nameNodeUri, ClientProtocol.class,
failoverProxyProvider);
}{code}
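To make that concrete, the provider-side hook referenced in the snippet above could be as small as the following fragment (a sketch; placement and synchronization are assumptions):

{code:java}
// Fragment that could live in AbstractNNFailoverProxyProvider (sketch only):
private AlignmentContext alignmentContext;  // null when state-id alignment is not in use

public synchronized void setAlignmentContext(AlignmentContext alignmentContext) {
  this.alignmentContext = alignmentContext;
}

public synchronized AlignmentContext getAlignmentContext() {
  return alignmentContext;
}
{code}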

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch, HDFS-13399-HDFS-12943.002.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13484) RBF: Disable Nameservices from the federation

2018-04-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448603#comment-16448603
 ] 

genericqa commented on HDFS-13484:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 32s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
19s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13484 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920305/HDFS-13484.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux acb794b9b29c 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 83e5f25 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24039/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24039/testReport/ |
| Max. process+thread count | 1412 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24039/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Commented] (HDFS-13217) Log audit event only used last EC policy name when add multiple policies from file

2018-04-23 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448580#comment-16448580
 ] 

Xiao Chen commented on HDFS-13217:
--

+1 pending checkstyle fix. Thanks [~liaoyuxiangqin].

> Log audit event only used last EC policy name when add multiple policies from 
> file 
> ---
>
> Key: HDFS-13217
> URL: https://issues.apache.org/jira/browse/HDFS-13217
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: liaoyuxiangqin
>Assignee: liaoyuxiangqin
>Priority: Major
> Attachments: HDFS-13217.001.patch, HDFS-13217.002.patch, 
> HDFS-13217.003.patch, HDFS-13217.004.patch
>
>
> When I read addErasureCodingPolicies() of the FSNamesystem class in the namenode, 
> I found that the following code only uses the last EC policy name for logAuditEvent, 
> so I think the audit log cannot track all of the policies when multiple erasure 
> coding policies are added to the ErasureCodingPolicyManager. Thanks.
> {code:java|title=FSNamesystem.java|borderStyle=solid}
> try {
>   checkOperation(OperationCategory.WRITE);
>   checkNameNodeSafeMode("Cannot add erasure coding policy");
>   for (ErasureCodingPolicy policy : policies) {
> try {
>   ErasureCodingPolicy newPolicy =
>   FSDirErasureCodingOp.addErasureCodingPolicy(this, policy,
>   logRetryCache);
>   addECPolicyName = newPolicy.getName();
>   responses.add(new AddErasureCodingPolicyResponse(newPolicy));
> } catch (HadoopIllegalArgumentException e) {
>   responses.add(new AddErasureCodingPolicyResponse(policy, e));
> }
>   }
>   success = true;
>   return responses.toArray(new AddErasureCodingPolicyResponse[0]);
> } finally {
>   writeUnlock(operationName);
>   if (success) {
> getEditLog().logSync();
>   }
>   logAuditEvent(success, operationName,addECPolicyName, null, null);
> }
> {code}
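One way the audit call could cover every added policy instead of only the last one (a sketch of the idea, mirroring the snippet above; not necessarily the committed fix):

{code:java}
// Sketch: collect every added policy name and log them all, comma separated.
List<String> addECPolicyNames = new ArrayList<>();
try {
  checkOperation(OperationCategory.WRITE);
  checkNameNodeSafeMode("Cannot add erasure coding policy");
  for (ErasureCodingPolicy policy : policies) {
    try {
      ErasureCodingPolicy newPolicy =
          FSDirErasureCodingOp.addErasureCodingPolicy(this, policy, logRetryCache);
      addECPolicyNames.add(newPolicy.getName());
      responses.add(new AddErasureCodingPolicyResponse(newPolicy));
    } catch (HadoopIllegalArgumentException e) {
      responses.add(new AddErasureCodingPolicyResponse(policy, e));
    }
  }
  success = true;
  return responses.toArray(new AddErasureCodingPolicyResponse[0]);
} finally {
  writeUnlock(operationName);
  if (success) {
    getEditLog().logSync();
  }
  logAuditEvent(success, operationName,
      String.join(",", addECPolicyNames), null, null);
}
{code}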



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13369) FSCK Report broken with RequestHedgingProxyProvider

2018-04-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448559#comment-16448559
 ] 

genericqa commented on HDFS-13369:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-13369 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13369 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920290/HDFS-13369.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24041/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FSCK Report broken with RequestHedgingProxyProvider 
> 
>
> Key: HDFS-13369
> URL: https://issues.apache.org/jira/browse/HDFS-13369
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.3
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13369.001.patch, HDFS-13369.002.patch, 
> HDFS-13369.003.patch
>
>
> Scenario:
> 1. Configure the RequestHedgingProxyProvider
> 2. Write some files in the file system
> 3. Take an FSCK report for the above files
>  
> {noformat}
> bin> hdfs fsck /file1 -locations -files -blocks
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler
>  cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:626)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:438)
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:628)
> at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:611)
> at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:263)
> at 
> org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:257)
> at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:319)
> at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:156)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:153)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:152)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:385){noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13433) webhdfs requests can be routed incorrectly in federated cluster

2018-04-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448518#comment-16448518
 ] 

Hudson commented on HDFS-13433:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14047 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14047/])
HDFS-13433. webhdfs requests can be routed incorrectly in federated (arp: rev 
c533c770476254c27309daeb2b41c73dc70bf3f4)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClientNameNodeAddress.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java


> webhdfs requests can be routed incorrectly in federated cluster
> ---
>
> Key: HDFS-13433
> URL: https://issues.apache.org/jira/browse/HDFS-13433
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Critical
> Fix For: 3.1.1, 3.0.3
>
> Attachments: HDFS-13433.01.patch, HDFS-13433.02.patch, 
> HDFS-13433.03.patch, HDFS-13433.04.patch
>
>
> In the following HA+Federated setup with two nameservices ns1 and ns2:
> # ns1 -> namenodes nn1, nn2
> # ns2 -> namenodes nn3, nn4
> # fs.defaultFS is {{hdfs://ns1}}.
> A webhdfs request issued to nn3/nn4 will be routed to ns1. This is because 
> {{setClientNamenodeAddress}} initializes {{NameNode#clientNamenodeAddress}} 
> using fs.defaultFS before the config is overridden.
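For reference, the setup described above boils down to configuration along these lines (values are examples only):

{noformat}
fs.defaultFS         = hdfs://ns1
dfs.nameservices     = ns1,ns2
dfs.ha.namenodes.ns1 = nn1,nn2
dfs.ha.namenodes.ns2 = nn3,nn4

# A webhdfs request sent to nn3 or nn4 should stay within ns2, but ends up
# tagged with ns1 because fs.defaultFS is read before the per-nameservice
# overrides are applied.
{noformat}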



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13490) RBF: Fix setSafeMode in the Router

2018-04-23 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448515#comment-16448515
 ] 

genericqa commented on HDFS-13490:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 16s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.contract.router.web.TestRouterWebHDFSContractAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13490 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920299/HDFS-13490.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9357c11ccbe3 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 83e5f25 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24038/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24038/testReport/ |
| Max. process+thread count | 984 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24038/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Commented] (HDFS-13369) FSCK Report broken with RequestHedgingProxyProvider

2018-04-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448491#comment-16448491
 ] 

Íñigo Goiri commented on HDFS-13369:


[~RANith], the current change in [^HDFS-13369.003.patch] is pretty involved on 
the client side, etc.
We are going to need somebody a little more familiar with this to review.

A few minor comments:
* Avoid Client#114
* Extra line after Client#138
* Extra line after TestHAFsck#51

> FSCK Report broken with RequestHedgingProxyProvider 
> 
>
> Key: HDFS-13369
> URL: https://issues.apache.org/jira/browse/HDFS-13369
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.3
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13369.001.patch, HDFS-13369.002.patch, 
> HDFS-13369.003.patch
>
>
> Scenario:
> 1. Configure the RequestHedgingProxyProvider
> 2. Write some files in the file system
> 3. Take an FSCK report for the above files
>  
> {noformat}
> bin> hdfs fsck /file1 -locations -files -blocks
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler
>  cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:626)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:438)
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:628)
> at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:611)
> at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:263)
> at 
> org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:257)
> at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:319)
> at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:156)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:153)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:152)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:385){noformat}
>  
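For context on the stack trace above, here is a minimal, self-contained plain-Java sketch of why the cast in {{RPC.getConnectionIdForProxy}} fails; the interface and class names below are made up for illustration (they are not Hadoop classes). The hedging provider installs its own {{InvocationHandler}} on the namenode proxy, and that handler does not implement the handler type the RPC layer downcasts to.

{code:java}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class HedgingCastSketch {
  interface ClientProtocol { String ping(); }
  interface RpcHandler extends InvocationHandler { String connectionId(); }

  // Stands in for RequestHedgingInvocationHandler: a plain InvocationHandler
  // that does NOT implement the handler type the RPC layer expects.
  static class HedgingHandler implements InvocationHandler {
    @Override
    public Object invoke(Object proxy, Method m, Object[] args) {
      return "pong";
    }
  }

  public static void main(String[] args) {
    ClientProtocol proxy = (ClientProtocol) Proxy.newProxyInstance(
        HedgingCastSketch.class.getClassLoader(),
        new Class<?>[] { ClientProtocol.class },
        new HedgingHandler());

    // The proxy itself works fine for protocol calls.
    System.out.println(proxy.ping());

    // Equivalent of the downcast in RPC.getConnectionIdForProxy: it assumes
    // every proxy's handler is an RpcHandler, which the hedging handler is not,
    // so this line throws ClassCastException.
    RpcHandler h = (RpcHandler) Proxy.getInvocationHandler(proxy);
    System.out.println(h.connectionId());
  }
}
{code}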



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13369) FSCK Report broken with RequestHedgingProxyProvider

2018-04-23 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13369:
---
Status: Patch Available  (was: Open)

> FSCK Report broken with RequestHedgingProxyProvider 
> 
>
> Key: HDFS-13369
> URL: https://issues.apache.org/jira/browse/HDFS-13369
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.3
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13369.001.patch, HDFS-13369.002.patch, 
> HDFS-13369.003.patch
>
>
> Scenario:
> 1. Configure the RequestHedgingProxyProvider
> 2. Write some files in the file system
> 3. Take an FSCK report for the above files
>  
> {noformat}
> bin> hdfs fsck /file1 -locations -files -blocks
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler
>  cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:626)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:438)
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:628)
> at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:611)
> at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:263)
> at 
> org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:257)
> at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:319)
> at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:156)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:153)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:152)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:385){noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13433) webhdfs requests can be routed incorrectly in federated cluster

2018-04-23 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13433:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 3.0.3
  3.1.1
Target Version/s:   (was: 3.1.1, 3.0.3)
  Status: Resolved  (was: Patch Available)

I've committed this. Thank you all for the reviews and comments.

[~daryn], please let me know if you still see a concern with this.

> webhdfs requests can be routed incorrectly in federated cluster
> ---
>
> Key: HDFS-13433
> URL: https://issues.apache.org/jira/browse/HDFS-13433
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Critical
> Fix For: 3.1.1, 3.0.3
>
> Attachments: HDFS-13433.01.patch, HDFS-13433.02.patch, 
> HDFS-13433.03.patch, HDFS-13433.04.patch
>
>
> In the following HA+Federated setup with two nameservices ns1 and ns2:
> # ns1 -> namenodes nn1, nn2
> # ns2 -> namenodes nn3, nn4
> # fs.defaultFS is {{hdfs://ns1}}.
> A webhdfs request issued to nn3/nn4 will be routed to ns1. This is because 
> {{setClientNamenodeAddress}} initializes {{NameNode#clientNamenodeAddress}} 
> using fs.defaultFS before the config is overridden.
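As a hedged sketch of the setup described above, the same layout can be expressed with the standard federation/HA configuration keys; the hostnames and ports below are illustrative and this is not taken from the patch.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class FederatedHaLayoutSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // Two nameservices, each with two NameNodes.
    conf.set("fs.defaultFS", "hdfs://ns1");
    conf.set("dfs.nameservices", "ns1,ns2");
    conf.set("dfs.ha.namenodes.ns1", "nn1,nn2");
    conf.set("dfs.ha.namenodes.ns2", "nn3,nn4");
    // Illustrative RPC/HTTP addresses for one of the ns2 NameNodes.
    conf.set("dfs.namenode.rpc-address.ns2.nn3", "host3:8020");
    conf.set("dfs.namenode.http-address.ns2.nn3", "host3:9870");
    // The reported problem: a NameNode serving ns2 still derived its client
    // namenode address from fs.defaultFS (ns1) because that value was read
    // before the nameservice-specific overrides took effect, so webhdfs
    // requests sent to nn3/nn4 were answered as if they belonged to ns1.
    System.out.println("defaultFS = " + conf.get("fs.defaultFS"));
  }
}
{code}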



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-04-23 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448477#comment-16448477
 ] 

Íñigo Goiri commented on HDFS-13443:


Thanks [~arshad.mohammad] for  [^HDFS-13443.003.patch].
A couple comments:
* Most of my confusion with what's local and remote comes from the fact that 
the methods for local and remote have the same name; for readability, I would 
make one of them different. Not sure if the local or the remote though.
* The new times in RBFConfigKeys should be defined as TimeUnit and we should 
use millis or seconds.
* 30 minutes for the connection seems a little high, 5 minutes (even 1) should 
be more than enough.
* I think I've seen something like getHostPortString in other places, is there 
a library we can use? Not sure StateStoreUtils is the place either as this is 
not the state store at all.
* Use {{new ArrayList<>()}} instead of {{new 
ArrayList()}}.
* For routerClientsCache, you can add a creator of connections so you don't 
need to do the check yourself (see the sketch after this list).
* It looks like we can have cases where we cannot get the client to the Router 
(MountTableRefreshService#168); we should return which Routers we updated 
successfully and which ones failed.
* Typo {{logRestult}}
* Why do we need to specifically use ZK in the unit test? I would leave it to 
whatever the default one is. We could potentially reuse some of the other unit 
tests that have a full subcluster with admin. Ideally some of the mount table 
related ones.
* Use Time.monotonicNow() instead of System.currentTimeMillis().
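To make the cache and timing suggestions above concrete, here is a rough sketch of a loader-backed client cache combined with a monotonic timer; the {{RouterClient}} placeholder and the connection setup are illustrative only, not the code in the patch.

{code:java}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import org.apache.hadoop.util.Time;

public class RouterRefreshSketch {

  /** Placeholder for the admin-protocol client held per remote Router. */
  static class RouterClient {
    RouterClient(String adminAddress) { /* open the admin connection here */ }
    void refreshMountTableEntries() { /* issue the refresh RPC here */ }
  }

  // A loader-backed cache removes the check-then-create pattern: get() builds
  // the connection on a miss, and idle entries expire on their own.
  private final LoadingCache<String, RouterClient> routerClients =
      CacheBuilder.newBuilder()
          .expireAfterAccess(5, TimeUnit.MINUTES)   // 5 minutes, per the comment above
          .build(new CacheLoader<String, RouterClient>() {
            @Override
            public RouterClient load(String adminAddress) {
              return new RouterClient(adminAddress);
            }
          });

  public void refreshRemoteRouter(String adminAddress) throws ExecutionException {
    long start = Time.monotonicNow();   // monotonic clock, not wall-clock time
    routerClients.get(adminAddress).refreshMountTableEntries();
    long elapsedMs = Time.monotonicNow() - start;
    System.out.println("Refreshed " + adminAddress + " in " + elapsedMs + " ms");
  }
}
{code}

A failed {{get()}} surfaces as an ExecutionException, which also gives a natural place to record which Routers could not be reached, per the comment above.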

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is refreshed every minute. After a change in the mount table, user operations 
> may still use the old mount table, which is incorrect.
> To update the mount table cache immediately, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table cache.*
>  * *When there is a change in mount table entries, the router admin server can 
> update its cache and ask the other routers to update their caches*. For example, if 
> there are three routers R1, R2, R3 in a cluster, then the add mount table entry API, 
> on the admin server side, will perform the following sequence of actions:
>  ## user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry to the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user
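A self-contained sketch of that six-step flow is below; all names are illustrative and this is not the RBF API proposed in the patch.

{code:java}
import java.util.List;

public class MountTableRefreshFlowSketch {

  interface Router {
    void addEntryToStateStore(String src, String dest); // step 2
    void refreshMountTableCache();                      // steps 3-5
  }

  /**
   * Step 1: the user submits the add request to one Router (R1). R1 persists
   * the entry, fans out refresh calls to the other Routers (R2, R3), refreshes
   * its own cache, and only then answers the user (step 6).
   */
  static boolean addMountTableEntry(Router receiver, List<Router> otherRouters,
                                    String src, String dest) {
    receiver.addEntryToStateStore(src, dest);   // step 2
    for (Router other : otherRouters) {         // steps 3 and 4
      other.refreshMountTableCache();
    }
    receiver.refreshMountTableCache();          // step 5
    return true;                                // step 6: success reported to the user
  }
}
{code}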



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13433) webhdfs requests can be routed incorrectly in federated cluster

2018-04-23 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13433:
-
Component/s: webhdfs

> webhdfs requests can be routed incorrectly in federated cluster
> ---
>
> Key: HDFS-13433
> URL: https://issues.apache.org/jira/browse/HDFS-13433
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HDFS-13433.01.patch, HDFS-13433.02.patch, 
> HDFS-13433.03.patch, HDFS-13433.04.patch
>
>
> In the following HA+Federated setup with two nameservices ns1 and ns2:
> # ns1 -> namenodes nn1, nn2
> # ns2 -> namenodes nn3, nn4
> # fs.defaultFS is {{hdfs://ns1}}.
> A webhdfs request issued to nn3/nn4 will be routed to ns1. This is because 
> {{setClientNamenodeAddress}} initializes {{NameNode#clientNamenodeAddress}} 
> using fs.defaultFS before the config is overridden.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


