[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread Jinglun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425043#comment-16425043
 ] 

Jinglun commented on HDFS-13388:


Hi [~elgoiri] [~linyiqun], seems good this time. 

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch
>
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even after the successful NN is already known. 
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when a failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can ever get is a dynamic proxy handled by 
> RequestHedgingInvocationHandler, which dispatches every invoked method to 
> multiple configured NNs.
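The intended "hedge once, then stick to the winner" behavior can be sketched as a standalone pattern. The names below (HedgingSketch, the cached currentUsedProxy field) are illustrative assumptions, not the actual Hadoop classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of the behavior HDFS-7858 intended: race all
// candidate endpoints on the first call, then cache the winner so
// subsequent calls go only to the previously successful endpoint.
public class HedgingSketch {
  private final List<Callable<String>> candidates;
  private volatile Callable<String> currentUsedProxy; // cached winner

  public HedgingSketch(List<Callable<String>> candidates) {
    this.candidates = new ArrayList<>(candidates);
  }

  public String invoke() throws Exception {
    Callable<String> cached = currentUsedProxy;
    if (cached != null) {
      return cached.call(); // subsequent calls: single endpoint only
    }
    // First call: hedge across all candidates, take the first success.
    ExecutorService pool = Executors.newFixedThreadPool(candidates.size());
    try {
      CompletionService<String> cs = new ExecutorCompletionService<>(pool);
      for (Callable<String> c : candidates) {
        cs.submit(() -> {
          String result = c.call();
          currentUsedProxy = c; // remember the successful endpoint
          return result;
        });
      }
      Exception last = null;
      for (int i = 0; i < candidates.size(); i++) {
        try {
          return cs.take().get(); // first success wins
        } catch (ExecutionException e) {
          last = e; // this candidate failed; wait for the next one
        }
      }
      throw last; // every candidate failed
    } finally {
      pool.shutdownNow();
    }
  }
}
```

The key point relative to the bug is the early return on the cached winner: once any endpoint succeeds, later calls bypass the hedging fan-out entirely until a failover clears the cache.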



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425017#comment-16425017
 ] 

genericqa commented on HDFS-13388:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13388 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917492/HADOOP-13388.0002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fff92c26e811 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f7a17b0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23771/testReport/ |
| Max. process+thread count | 330 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23771/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> 

[jira] [Commented] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424967#comment-16424967
 ] 

genericqa commented on HDFS-13045:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m  4s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13045 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917488/HDFS-13045.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3570c1670faf 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d06d88 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23769/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23769/testReport/ |
| Max. process+thread count | 1029 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread Jinglun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-13388:
---
Status: Patch Available  (was: Open)

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch
>






[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread Jinglun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-13388:
---
Attachment: HADOOP-13388.0002.patch

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch
>






[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread Jinglun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-13388:
---
Status: Open  (was: Patch Available)

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch
>






[jira] [Updated] (HDFS-13391) Ozone: Make dependency of internal sub-module scope as provided in maven.

2018-04-03 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-13391:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Ozone: Make dependency of internal sub-module scope as provided in maven.
> -
>
> Key: HDFS-13391
> URL: https://issues.apache.org/jira/browse/HDFS-13391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13391-HDFS-7240.000.patch
>
>
> Whenever an internal sub-module is added as a dependency, its scope has to 
> be set to {{provided}}.
> If the scope is not specified, it falls back to the default scope, which is 
> {{compile}}; this causes the dependency jar (the sub-module jar) to be copied 
> into the {{share//lib}} directory during packaging. Since we use 
> {{copyifnotexists}} logic, the binary jar of the actual sub-module will not 
> be copied, which results in the jar being placed in the wrong location 
> inside the distribution.
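As a sketch, the fix amounts to declaring internal sub-module dependencies with an explicit {{provided}} scope in the consuming module's pom.xml (the artifact name below is a hypothetical placeholder, not an actual Hadoop module):

```xml
<!-- Hypothetical pom.xml fragment: an internal sub-module declared with
     provided scope, so its jar is not copied into the packaged lib
     directory as the default compile scope would cause. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>some-internal-submodule</artifactId> <!-- hypothetical name -->
  <version>${project.version}</version>
  <scope>provided</scope> <!-- the default, if omitted, is compile -->
</dependency>
```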






[jira] [Commented] (HDFS-13391) Ozone: Make dependency of internal sub-module scope as provided in maven.

2018-04-03 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424940#comment-16424940
 ] 

Nanda kumar commented on HDFS-13391:


I have committed this to the feature branch.

> Ozone: Make dependency of internal sub-module scope as provided in maven.
> -
>
> Key: HDFS-13391
> URL: https://issues.apache.org/jira/browse/HDFS-13391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13391-HDFS-7240.000.patch
>






[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-04-03 Thread Dennis Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424935#comment-16424935
 ] 

Dennis Huo commented on HDFS-13056:
---

Good observation, [~xiaochen]; refactoring FileChecksumHelper that way seems 
to save a lot of duplicate code. I applied your remaining suggestions in 
[^HDFS-13056.014.patch] and also mirrored them in the GitHub pull request under 
[this 
commit|https://github.com/apache/hadoop/pull/344/commits/f110ac1ceb57fb4974e3b691fbc53fb5f863885f]
 in case that's more convenient for anyone to view.

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Assignee: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, 
> HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.005.patch, 
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, 
> HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, 
> HDFS-13056.005.patch, HDFS-13056.006.patch, HDFS-13056.007.patch, 
> HDFS-13056.008.patch, HDFS-13056.009.patch, HDFS-13056.010.patch, 
> HDFS-13056.011.patch, HDFS-13056.012.patch, HDFS-13056.013.patch, 
> HDFS-13056.014.patch, Reference_only_zhen_PPOC_hadoop2.6.X.diff, 
> hdfs-file-composite-crc32-v1.pdf, hdfs-file-composite-crc32-v2.pdf, 
> hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> An often-raised shortcoming of this approach is that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the 
> problem that checksums of basic replicated files are not comparable to 
> striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped and replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage 
> systems. This feature can also be added in place to be compatible with 
> existing block metadata, and doesn't need to change the normal path of chunk 
> verification, so it is minimally invasive. This also means even large 
> preexisting HDFS deployments could adopt this feature to retroactively sync 
> data. A detailed design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf
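The block-size sensitivity described above can be illustrated with a toy hierarchical digest. This is a simplified sketch only, not the actual HDFS FileChecksum or composite-CRC algorithm:

```java
import java.security.MessageDigest;
import java.util.Arrays;

// Toy illustration of why an aggregate built as MD5-over-per-chunk-MD5s
// depends on the chunk size, while a digest over the raw bytes does not.
public class ChunkedDigestDemo {
  static byte[] md5(byte[] data) throws Exception {
    return MessageDigest.getInstance("MD5").digest(data);
  }

  // MD5 of the concatenated per-chunk MD5s (the hierarchical scheme).
  static byte[] hierarchical(byte[] data, int chunkSize) throws Exception {
    MessageDigest outer = MessageDigest.getInstance("MD5");
    for (int off = 0; off < data.length; off += chunkSize) {
      int end = Math.min(off + chunkSize, data.length);
      outer.update(md5(Arrays.copyOfRange(data, off, end)));
    }
    return outer.digest();
  }
}
```

Identical bytes hashed with different chunk sizes produce different aggregates, which is exactly why checksums from clusters with different block/chunk settings cannot be compared; composed CRCs avoid this because CRCs of concatenated ranges combine independently of where the boundaries fall.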






[jira] [Updated] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-04-03 Thread Dennis Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Huo updated HDFS-13056:
--
Attachment: HDFS-13056.014.patch

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Assignee: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, 
> HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.005.patch, 
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, 
> HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, 
> HDFS-13056.005.patch, HDFS-13056.006.patch, HDFS-13056.007.patch, 
> HDFS-13056.008.patch, HDFS-13056.009.patch, HDFS-13056.010.patch, 
> HDFS-13056.011.patch, HDFS-13056.012.patch, HDFS-13056.013.patch, 
> HDFS-13056.014.patch, Reference_only_zhen_PPOC_hadoop2.6.X.diff, 
> hdfs-file-composite-crc32-v1.pdf, hdfs-file-composite-crc32-v2.pdf, 
> hdfs-file-composite-crc32-v3.pdf
>






[jira] [Commented] (HDFS-13391) Ozone: Make dependency of internal sub-module scope as provided in maven.

2018-04-03 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424933#comment-16424933
 ] 

Nanda kumar commented on HDFS-13391:


Thanks [~msingh] for the review, I will commit this shortly.

> Ozone: Make dependency of internal sub-module scope as provided in maven.
> -
>
> Key: HDFS-13391
> URL: https://issues.apache.org/jira/browse/HDFS-13391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13391-HDFS-7240.000.patch
>






[jira] [Commented] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters

2018-04-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424932#comment-16424932
 ] 

genericqa commented on HDFS-13237:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
38m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13237 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917479/HDFS-13237.000.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 64aa7e275186 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d06d88 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 311 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23768/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Documentation] RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13237
> URL: https://issues.apache.org/jira/browse/HDFS-13237
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13237.000.patch
>
>
> Document the feature to spread mount points across multiple subclusters.






[jira] [Comment Edited] (HDFS-13393) Improve OOM logging

2018-04-03 Thread lqjack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424931#comment-16424931
 ] 

lqjack edited comment on HDFS-13393 at 4/4/18 3:12 AM:
---

I have created the patch for this task. How can I submit the code? Thanks.


was (Author: lqjack):
catch the OOM 

> Improve OOM logging
> ---
>
> Key: HDFS-13393
> URL: https://issues.apache.org/jira/browse/HDFS-13393
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, datanode
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> It is not uncommon to find "java.lang.OutOfMemoryError: unable to create new 
> native thread" errors in an HDFS cluster. Most often this happens when the 
> DataNode creates DataXceiver threads, or when the balancer creates threads for 
> moving blocks around.
> In most cases, the "OOM" is a symptom of the number of threads reaching the 
> system limit, rather than of actually running out of memory, and the current 
> logging of this message is usually misleading (suggesting it is due to 
> insufficient memory).
> How about capturing the OOM and, if it is due to "unable to create new native 
> thread", printing a more helpful message like "bump your ulimit" or "take a 
> jstack of the process"?
> Even better, surface this error to make it more visible. It usually takes a 
> while for an in-depth investigation after users notice a job failing, and by 
> then the evidence may already be gone (like jstack output).
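The suggestion above can be sketched as a minimal illustration, assuming we only inspect the error message; this is not the actual DataNode or balancer code, just the shape of the classification:

```java
/** Minimal sketch of remapping the misleading "OOM" message; not Hadoop code. */
class OomClassifier {

    /** Return a more actionable hint for thread-exhaustion OOMs. */
    static String hintFor(OutOfMemoryError e) {
        String msg = e.getMessage();
        if (msg != null && msg.contains("unable to create new native thread")) {
            // The process hit the thread limit, not the heap limit: suggest
            // raising the ulimit and capturing a jstack while threads are alive.
            return "Likely out of threads, not memory: check 'ulimit -u' and "
                + "the process thread count, and take a jstack of the process.";
        }
        return "Out of memory: " + msg;
    }

    public static void main(String[] args) {
        System.out.println(hintFor(
            new OutOfMemoryError("unable to create new native thread")));
    }
}
```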






[jira] [Commented] (HDFS-13393) Improve OOM logging

2018-04-03 Thread lqjack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424931#comment-16424931
 ] 

lqjack commented on HDFS-13393:
---

catch the OOM 

> Improve OOM logging
> ---
>
> Key: HDFS-13393
> URL: https://issues.apache.org/jira/browse/HDFS-13393
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, datanode
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> It is not uncommon to find "java.lang.OutOfMemoryError: unable to create new 
> native thread" errors in an HDFS cluster. Most often this happens when the 
> DataNode creates DataXceiver threads, or when the balancer creates threads for 
> moving blocks around.
> In most cases, the "OOM" is a symptom of the number of threads reaching the 
> system limit, rather than of actually running out of memory, and the current 
> logging of this message is usually misleading (suggesting it is due to 
> insufficient memory).
> How about capturing the OOM and, if it is due to "unable to create new native 
> thread", printing a more helpful message like "bump your ulimit" or "take a 
> jstack of the process"?
> Even better, surface this error to make it more visible. It usually takes a 
> while for an in-depth investigation after users notice a job failing, and by 
> then the evidence may already be gone (like jstack output).






[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread Jinglun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424929#comment-16424929
 ] 

Jinglun commented on HDFS-13388:


Thanks [~elgoiri] and [~linyiqun] for the suggestions. I will make the fix and 
attach a new patch.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch
>
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class, which handles invoked methods by 
> calling multiple configured NNs.
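The intended hedge-then-stick behavior can be modeled with a small sketch; it is sequential for brevity (the real provider hedges the NNs concurrently) and all names here are illustrative, not Hadoop's actual API:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/** Minimal model of the intended hedge-then-stick behavior; not Hadoop code. */
class HedgingModel {

    /** Stand-in for a NameNode proxy that counts how often it is invoked. */
    static class Proxy {
        final boolean active;
        final AtomicInteger calls = new AtomicInteger();
        Proxy(boolean active) { this.active = active; }
        boolean invoke() { calls.incrementAndGet(); return active; }
    }

    private final List<Proxy> proxies;
    private Proxy currentUsedProxy;  // the previously successful proxy, if any

    HedgingModel(List<Proxy> proxies) { this.proxies = proxies; }

    /** Hedge across all configured proxies only until one succeeds. */
    boolean invoke() {
        if (currentUsedProxy != null) {
            // The fix: reuse the known-good proxy instead of fanning out again.
            return currentUsedProxy.invoke();
        }
        for (Proxy p : proxies) {      // first call: try every configured NN
            if (p.invoke()) {
                currentUsedProxy = p;  // remember the winner
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Proxy standby = new Proxy(false);
        Proxy active = new Proxy(true);
        HedgingModel h = new HedgingModel(List.of(standby, active));
        h.invoke(); h.invoke(); h.invoke();
        // Without the cached proxy the standby would be called on every
        // invocation; with it, only the first invocation touches the standby.
        System.out.println(standby.calls.get() + " " + active.calls.get());
    }
}
```

Caching the first successful proxy is what keeps subsequent calls from fanning out to every configured NN.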






[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424920#comment-16424920
 ] 

Íñigo Goiri commented on HDFS-13388:


[~LiJinglun], regarding the static imports: instead of writing the fully 
qualified {{Assert.assertTrue()}} and {{Mockito.when()}}, you can add static 
imports at the beginning of the file:
{code}
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.when;
{code}
and then call {{assertTrue()}} and {{when()}} directly.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch
>
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class, which handles invoked methods by 
> calling multiple configured NNs.






[jira] [Commented] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424916#comment-16424916
 ] 

Íñigo Goiri commented on HDFS-13045:


[^HDFS-13045.000.patch] has the unit test that should capture the exception.
It should fail until this issue gets fixed.

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13045.000.patch
>
>
> Currently, the Router directly returns the exception response from the 
> subcluster to the client, which may not have the correct error message, 
> especially when the error message contains a path.
> One example: we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" and doesn't have the corresponding 
> privilege, the error msg currently looks like "Permission denied. user=user1 
> is not the owner of inode=/c/d", which may confuse the user. It would be 
> better to map the path back to the original mount path.
>  
>  
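The suggested fix, mapping the subcluster path in the message back to the mount path, could look roughly like this; an illustrative sketch with hypothetical names, not the Router's actual code:

```java
/** Illustrative sketch: rewrite a subcluster path inside an error message
 *  back to the mount-table path the client actually used. */
class ErrorMessageRewriter {

    static String rewrite(String msg, String subclusterPath, String mountPath) {
        // "Permission denied. ... inode=/c/d" with mount /a/b -> /c/d becomes
        // "... inode=/a/b", so the client sees the path it asked about.
        return msg.replace(subclusterPath, mountPath);
    }

    public static void main(String[] args) {
        System.out.println(rewrite(
            "Permission denied. user=user1 is not the owner of inode=/c/d",
            "/c/d", "/a/b"));
    }
}
```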






[jira] [Updated] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13045:
---
Attachment: HDFS-13045.000.patch

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13045.000.patch
>
>
> Currently, the Router directly returns the exception response from the 
> subcluster to the client, which may not have the correct error message, 
> especially when the error message contains a path.
> One example: we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" and doesn't have the corresponding 
> privilege, the error msg currently looks like "Permission denied. user=user1 
> is not the owner of inode=/c/d", which may confuse the user. It would be 
> better to map the path back to the original mount path.
>  
>  






[jira] [Updated] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13045:
---
Status: Patch Available  (was: Open)

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13045.000.patch
>
>
> Currently, the Router directly returns the exception response from the 
> subcluster to the client, which may not have the correct error message, 
> especially when the error message contains a path.
> One example: we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" and doesn't have the corresponding 
> privilege, the error msg currently looks like "Permission denied. user=user1 
> is not the owner of inode=/c/d", which may confuse the user. It would be 
> better to map the path back to the original mount path.
>  
>  






[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424907#comment-16424907
 ] 

Yiqun Lin commented on HDFS-13388:
--

Hi [~LiJinglun],
{quote}I'm new to the Hadoop community and don't know how to do the checkstyle fix
{quote}
The checkstyle warnings are reported in the QA report. You can click 
[here|https://builds.apache.org/job/PreCommit-HDFS-Build/23760/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt],
 fix them, and then attach the updated patch.

Hope this helps :)

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch
>
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class, which handles invoked methods by 
> calling multiple configured NNs.






[jira] [Updated] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13045:
---
Summary: RBF: Improve error message returned from subcluster  (was: RBF: 
Improve error message returned from subcsluter)

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
>
> Currently, the Router directly returns the exception response from the 
> subcluster to the client, which may not have the correct error message, 
> especially when the error message contains a path.
> One example: we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" and doesn't have the corresponding 
> privilege, the error msg currently looks like "Permission denied. user=user1 
> is not the owner of inode=/c/d", which may confuse the user. It would be 
> better to map the path back to the original mount path.
>  
>  






[jira] [Updated] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters

2018-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13237:
---
Status: Patch Available  (was: Open)

> [Documentation] RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13237
> URL: https://issues.apache.org/jira/browse/HDFS-13237
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13237.000.patch
>
>
> Document the feature to spread mount points across multiple subclusters.






[jira] [Commented] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters

2018-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424877#comment-16424877
 ] 

Íñigo Goiri commented on HDFS-13237:


I attached [^HDFS-13237.000.patch] with a first attempt at this documentation.
We can go deeper and try to add more examples.
Feedback appreciated.

While writing it, I thought we could add an approach similar to SPACE but based 
on the load of the subclusters.

> [Documentation] RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13237
> URL: https://issues.apache.org/jira/browse/HDFS-13237
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13237.000.patch
>
>
> Document the feature to spread mount points across multiple subclusters.






[jira] [Updated] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters

2018-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13237:
---
Attachment: HDFS-13237.000.patch

> [Documentation] RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13237
> URL: https://issues.apache.org/jira/browse/HDFS-13237
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13237.000.patch
>
>
> Document the feature to spread mount points across multiple subclusters.






[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread Jinglun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424875#comment-16424875
 ] 

Jinglun commented on HDFS-13388:


Thanks [~elgoiri] for the review. I'm new to the Hadoop community and don't know 
how to do the checkstyle fix. If you could handle the checkstyle fix and the 
static imports, that would be very helpful.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch
>
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class, which handles invoked methods by 
> calling multiple configured NNs.






[jira] [Commented] (HDFS-13376) TLS support error in Native Build of hadoop-hdfs-native-client

2018-04-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424854#comment-16424854
 ] 

genericqa commented on HDFS-13376:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13376 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917474/HDFS-13376.002.patch |
| Optional Tests |  asflicense  |
| uname | Linux c80fc942c523 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d06d88 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23767/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |





> TLS support error in Native Build of hadoop-hdfs-native-client
> --
>
> Key: HDFS-13376
> URL: https://issues.apache.org/jira/browse/HDFS-13376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation, native
>Affects Versions: 3.1.0
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-13376.001.patch, HDFS-13376.002.patch
>
>
> mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
> -Pdist,native -DskipTests -Dtar
> {noformat}
> [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message):
>  [exec]   FATAL ERROR: The required feature thread_local storage is not 
> supported by
>  [exec]   your compiler.  Known compilers that support this feature: GCC, 
> Visual
>  [exec]   Studio, Clang (community version), Clang (version for iOS 9 and 
> later).
>  [exec]
>  [exec]
>  [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed
>  [exec] -- Configuring incomplete, errors occurred!
> {noformat}
> My environment:
> Linux: Red Hat 4.4.7-3
> cmake: 3.8.2
> java: 1.8.0_131
> gcc: 4.4.7
> maven: 3.5.0
> This seems to be because of the low gcc version; I will report back after 
> confirming it.
> Maybe {{BUILDING.txt}} needs an update to state the lowest supported gcc 
> version.






[jira] [Commented] (HDFS-13376) TLS support error in Native Build of hadoop-hdfs-native-client

2018-04-03 Thread LiXin Ge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424841#comment-16424841
 ] 

LiXin Ge commented on HDFS-13376:
-

[~James C], thanks for the explanation; I think I get the point now :)
{quote}Let me know if you want to extend this and I can check that out too.
{quote}
Yes, I'm willing to make it more helpful. I have updated the patch to include 
more information now.

> TLS support error in Native Build of hadoop-hdfs-native-client
> --
>
> Key: HDFS-13376
> URL: https://issues.apache.org/jira/browse/HDFS-13376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation, native
>Affects Versions: 3.1.0
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-13376.001.patch, HDFS-13376.002.patch
>
>
> mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
> -Pdist,native -DskipTests -Dtar
> {noformat}
> [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message):
>  [exec]   FATAL ERROR: The required feature thread_local storage is not 
> supported by
>  [exec]   your compiler.  Known compilers that support this feature: GCC, 
> Visual
>  [exec]   Studio, Clang (community version), Clang (version for iOS 9 and 
> later).
>  [exec]
>  [exec]
>  [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed
>  [exec] -- Configuring incomplete, errors occurred!
> {noformat}
> My environment:
> Linux: Red Hat 4.4.7-3
> cmake: 3.8.2
> java: 1.8.0_131
> gcc: 4.4.7
> maven: 3.5.0
> This seems to be because of the low gcc version; I will report back after 
> confirming it.
> Maybe {{BUILDING.txt}} needs an update to state the lowest supported gcc 
> version.






[jira] [Updated] (HDFS-13376) TLS support error in Native Build of hadoop-hdfs-native-client

2018-04-03 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDFS-13376:

Attachment: HDFS-13376.002.patch

> TLS support error in Native Build of hadoop-hdfs-native-client
> --
>
> Key: HDFS-13376
> URL: https://issues.apache.org/jira/browse/HDFS-13376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation, native
>Affects Versions: 3.1.0
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-13376.001.patch, HDFS-13376.002.patch
>
>
> mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
> -Pdist,native -DskipTests -Dtar
> {noformat}
> [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message):
>  [exec]   FATAL ERROR: The required feature thread_local storage is not 
> supported by
>  [exec]   your compiler.  Known compilers that support this feature: GCC, 
> Visual
>  [exec]   Studio, Clang (community version), Clang (version for iOS 9 and 
> later).
>  [exec]
>  [exec]
>  [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed
>  [exec] -- Configuring incomplete, errors occurred!
> {noformat}
> My environment:
> Linux: Red Hat 4.4.7-3
> cmake: 3.8.2
> java: 1.8.0_131
> gcc: 4.4.7
> maven: 3.5.0
> This seems to be because of the low gcc version; I will report back after 
> confirming it.
> Maybe {{BUILDING.txt}} needs an update to state the lowest supported gcc 
> version.






[jira] [Updated] (HDFS-13376) TLS support error in Native Build of hadoop-hdfs-native-client

2018-04-03 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDFS-13376:

Status: Patch Available  (was: Open)

> TLS support error in Native Build of hadoop-hdfs-native-client
> --
>
> Key: HDFS-13376
> URL: https://issues.apache.org/jira/browse/HDFS-13376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation, native
>Affects Versions: 3.1.0
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-13376.001.patch, HDFS-13376.002.patch
>
>
> mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
> -Pdist,native -DskipTests -Dtar
> {noformat}
> [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message):
>  [exec]   FATAL ERROR: The required feature thread_local storage is not 
> supported by
>  [exec]   your compiler.  Known compilers that support this feature: GCC, 
> Visual
>  [exec]   Studio, Clang (community version), Clang (version for iOS 9 and 
> later).
>  [exec]
>  [exec]
>  [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed
>  [exec] -- Configuring incomplete, errors occurred!
> {noformat}
> My environment:
> Linux: Red Hat 4.4.7-3
> cmake: 3.8.2
> java: 1.8.0_131
> gcc: 4.4.7
> maven: 3.5.0
> This seems to be because of the low gcc version; I will report back after 
> confirming it.
> Maybe {{BUILDING.txt}} needs an update to state the lowest supported gcc 
> version.






[jira] [Updated] (HDFS-13376) TLS support error in Native Build of hadoop-hdfs-native-client

2018-04-03 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDFS-13376:

Status: Open  (was: Patch Available)

> TLS support error in Native Build of hadoop-hdfs-native-client
> --
>
> Key: HDFS-13376
> URL: https://issues.apache.org/jira/browse/HDFS-13376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation, native
>Affects Versions: 3.1.0
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-13376.001.patch
>
>
> mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
> -Pdist,native -DskipTests -Dtar
> {noformat}
> [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message):
>  [exec]   FATAL ERROR: The required feature thread_local storage is not 
> supported by
>  [exec]   your compiler.  Known compilers that support this feature: GCC, 
> Visual
>  [exec]   Studio, Clang (community version), Clang (version for iOS 9 and 
> later).
>  [exec]
>  [exec]
>  [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed
>  [exec] -- Configuring incomplete, errors occurred!
> {noformat}
> My environment:
> Linux: Red Hat 4.4.7-3
> cmake: 3.8.2
> java: 1.8.0_131
> gcc: 4.4.7
> maven: 3.5.0
> This seems to be because of the low gcc version; I will report back after 
> confirming it.
> Maybe {{BUILDING.txt}} needs an update to state the lowest supported gcc 
> version.






[jira] [Commented] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.

2018-04-03 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424829#comment-16424829
 ] 

Erik Krogen commented on HDFS-13331:


Cool, looks great to me, thanks [~zero45]! Last nit - I see in v004 that 
{{getLastSeenStateId}} in {{ClientCGIContext}} is no longer public as it was in 
v003, is that intentional?

> Add lastSeenStateId to RpcRequestHeader.
> 
>
> Key: HDFS-13331
> URL: https://issues.apache.org/jira/browse/HDFS-13331
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13331-HDFS-12943.002.patch, 
> HDFS-13331-HDFS-12943.003..patch, HDFS-13331-HDFS-12943.004.patch, 
> HDFS-13331.trunk.001.patch, HDFS_13331.trunk.000.patch
>
>
> HDFS-12977 added a stateId to the RpcResponseHeader, which is returned by 
> the NameNode and stored by the DFSClient.
> This JIRA follows up on that work: have the DFSClient send its stored 
> "lastSeenStateId" in the RpcRequestHeader so that ObserverNodes can compare 
> it with their own and act accordingly.
> This JIRA focuses only on making the DFSClient send its state through the 
> RpcRequestHeader.






[jira] [Updated] (HDFS-13386) RBF: Wrong date information in list file(-ls) result

2018-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13386:
---
Summary: RBF: Wrong date information in list file(-ls) result  (was: RBF: 
wrong date information in list file(-ls) result)

> RBF: Wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13386.000.patch, image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> This is happening because {{getMountPointDates}} is not implemented:
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
>   Map<String, Long> ret = new TreeMap<>();
>   // TODO add when we have a Mount Table
>   return ret;
> }
> {code}
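Once a mount table is available, the TODO above could be filled in roughly along these lines. This is only a sketch: the {{MountRecord}} type and its fields are hypothetical stand-ins, not the actual RBF mount-table API. The idea is to map each first-level child under the requested path to the newest modification time seen for its subtree.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MountPointDates {
    /** Hypothetical stand-in for a mount-table entry; not the real RBF type. */
    public static class MountRecord {
        final String path;
        final long modTime;
        public MountRecord(String path, long modTime) {
            this.path = path;
            this.modTime = modTime;
        }
    }

    /**
     * Sketch of an implemented getMountPointDates: for each mount entry below
     * `path`, map the first path component under `path` to the newest
     * modification time seen for that subtree.
     */
    public static Map<String, Long> getMountPointDates(String path, List<MountRecord> mountTable) {
        Map<String, Long> ret = new TreeMap<>();
        String prefix = path.endsWith("/") ? path : path + "/";
        for (MountRecord r : mountTable) {
            if (r.path.startsWith(prefix)) {
                String child = r.path.substring(prefix.length()).split("/")[0];
                ret.merge(child, r.modTime, Math::max);
            }
        }
        return ret;
    }

    public static void main(String[] args) {
        List<MountRecord> table = Arrays.asList(
            new MountRecord("/a/x", 10L),
            new MountRecord("/a/y", 20L),
            new MountRecord("/b", 5L));
        System.out.println(getMountPointDates("/", table));  // {a=20, b=5}
    }
}
```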






[jira] [Commented] (HDFS-13386) RBF: wrong date information in list file(-ls) result

2018-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424827#comment-16424827
 ] 

Íñigo Goiri commented on HDFS-13386:


Thanks [~dibyendu_hadoop], please go ahead.

> RBF: wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13386.000.patch, image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> This is happening because {{getMountPointDates}} is not implemented:
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
>   Map<String, Long> ret = new TreeMap<>();
>   // TODO add when we have a Mount Table
>   return ret;
> }
> {code}






[jira] [Comment Edited] (HDFS-13386) RBF: wrong date information in list file(-ls) result

2018-04-03 Thread Dibyendu Karmakar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424796#comment-16424796
 ] 

Dibyendu Karmakar edited comment on HDFS-13386 at 4/4/18 12:13 AM:
---

Hi [~elgoiri],

If you are not working on this, I would like to add a patch with unit tests.


was (Author: dibyendu_hadoop):
Hi [~elgoiri],

If you are not working on this I would like to add a patch with unit tests.

> RBF: wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13386.000.patch, image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> This is happening because {{getMountPointDates}} is not implemented:
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
>   Map<String, Long> ret = new TreeMap<>();
>   // TODO add when we have a Mount Table
>   return ret;
> }
> {code}






[jira] [Commented] (HDFS-13386) RBF: wrong date information in list file(-ls) result

2018-04-03 Thread Dibyendu Karmakar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424796#comment-16424796
 ] 

Dibyendu Karmakar commented on HDFS-13386:
--

Hi [~elgoiri],

If you are not working on this I would like to add a patch with unit tests.

> RBF: wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13386.000.patch, image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> This is happening because {{getMountPointDates}} is not implemented:
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
>   Map<String, Long> ret = new TreeMap<>();
>   // TODO add when we have a Mount Table
>   return ret;
> }
> {code}






[jira] [Comment Edited] (HDFS-13329) Add/ Update disk space counters for trash (trash used, disk remaining etc.)

2018-04-03 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424760#comment-16424760
 ] 

Hanisha Koneru edited comment on HDFS-13329 at 4/3/18 11:41 PM:


Thanks for working on this, [~bharatviswa]. 

Looks good overall. I have some comments:
1. Can you add Javadoc and a License header to {{CachingGetSpaceUsedWithExclude}} and {{DUWithExclude}}?

2. In {{DUWithExclude}}, we are calculating the {{du}} for both the path and the excludedPath and then subtracting the latter from the former. We end up calculating the space used by the replica trash twice this way.
{code:java}
setUsed((Long.parseLong(tokens[0]) * 1024) - (Long.parseLong(tokens1[0]) * 
1024));{code}
We could instead utilize the {{--exclude}} option of the {{du}} command.
Also, can we add the exclude option to {{DU.java}} itself instead of a separate class? I am not sure how complicated that would get, though; I am OK with this approach too.

3. Can we rename {{TestDU#testDUWithSubtract}} to {{testDUWithExclude}} to be consistent with the naming?

4. In {{TestDU#testDUWithSubtract}}, the last assert statement has a typo.
{code:java}
assertTrue("invalid-disk-size", duSize >= writtenSize && writtenSize <= (duSize 
+ slack));
{code}
It should have been
{code:java}
 du <= (writtenSize + slack) {code}

5. In {{DatanodeInfo#getDatanodeReport()}}, can we report the new disk counters after the {{DFSRemaining%}} counter?

6. In {{DFSConfigKeys}},
{code:java}
  public static final String DFS_DATANODE_REPLICA_TRASH_PERCENT =  
"dfs.datanode.replica.trash.keep.alive.interval";
{code}
The value for the config parameter is mistyped.

7. In {{BlockPoolSlice}},
 ** In {{loadDfsUsed()}}, the variable {{replicaTrashUsed}} is not used.
 ** In {{loadReplicaTrashUsed}}, if we are using separate {{CachingGetSpaceUsed}} objects for {{dfsUsage}} and {{replicaTrashUsage}}, we should have separate cache files too.

8. The {{FsVolumeImpl#replicaTrashLimit}} variable can be final.

9. In {{FsVolumeImpl#onMetaFileDeletion()}}, we should not decrement the block count of the BP.

10. In {{DFSAdmin}}, can we let the DN figure out whether replicaTrash is enabled and send the report accordingly?


was (Author: hanishakoneru):
Thanks for working on this, [~bharatviswa]. 

Looks good overall. I have a some comments:
 # Can you add Javadoc and License to the 
{color:#3b73af}{{{color}CachingGetSpaceUsedWithExclude}} and {{DUWithExclude}}.
 # In {{DUWithExclude}}, we are calculating the {{du}} for both the path and 
the excludedPath and then subtracting the later from the former. We end up 
calculating the space used by replica trash twice this way.
{code:java}
setUsed((Long.parseLong(tokens[0]) * 1024) - (Long.parseLong(tokens1[0]) * 
1024));{code}
we could instead utilized the {{--exclude}} option of {{du}} command.
 Also, can we add the exclude option to {{DU.java}} itself instead of another 
class? I am not sure how complicated that would get though. I am ok with this 
approach too.

 # Can we rename {{TestDU#testDUWithSubtract}}, to {{testDUWithExclude}} to be 
consistent with the naming.
 # In {{TestDU#testDUWithSubtract}}, the last assert statement has a typo.
{code:java}
assertTrue("invalid-disk-size", duSize >= writtenSize && writtenSize <= (duSize 
+ slack));
{code}
Should have been
{code:java}
 du <= (writtenSize + slack) {code}

 # In {{DatanodeInfo#getDatanodeReport()}}, can we report the new disk counters 
after the {{DFSRemaining%}} counter.
 # In {{DFSConfigKeys}},
{code:java}
  public static final String DFS_DATANODE_REPLICA_TRASH_PERCENT =  
"dfs.datanode.replica.trash.keep.alive.interval";
{code}
The value for the config parameter is mistyped.

 # In {{BlockPoolSlice}},
 ** {{In loadDfsUsed(), variable }}{{replicaTrashUsed}} is not used.
 ** In {{loadReplicaTrashUsed}}, if we are using separate 
{{CachingGetSpaceUsed}} objects for {{dfsUsage}} and {{replicaTrashUsage}}, we 
should have separate Cache files too.
 # {{FsVolumeImpl#replicaTrashLimit}} variable can be final.
 # In {{FsVolumeImpl#onMetaFileDeletion()}}, we should not decrement the number 
of blocks count in the BP.
 # {{DFSAdmin}}, can we let the DN figure out whether replicaTrash is enabled 
or not and send the report accordingly?

> Add/ Update disk space counters for trash (trash used, disk remaining etc.) 
> 
>
> Key: HDFS-13329
> URL: https://issues.apache.org/jira/browse/HDFS-13329
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13329-HDFS-12996.01.patch, 
> HDFS-13329-HDFS-12996.02.patch
>
>
> Add 3 more counters required for datanode replica trash.
>  # diskAvailable
>  # 

[jira] [Comment Edited] (HDFS-13329) Add/ Update disk space counters for trash (trash used, disk remaining etc.)

2018-04-03 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424760#comment-16424760
 ] 

Hanisha Koneru edited comment on HDFS-13329 at 4/3/18 11:39 PM:


Thanks for working on this, [~bharatviswa]. 

Looks good overall. I have some comments:
 # Can you add Javadoc and a License header to {{CachingGetSpaceUsedWithExclude}} and {{DUWithExclude}}?
 # In {{DUWithExclude}}, we are calculating the {{du}} for both the path and the excludedPath and then subtracting the latter from the former. We end up calculating the space used by the replica trash twice this way.
{code:java}
setUsed((Long.parseLong(tokens[0]) * 1024) - (Long.parseLong(tokens1[0]) * 
1024));{code}
We could instead utilize the {{--exclude}} option of the {{du}} command.
 Also, can we add the exclude option to {{DU.java}} itself instead of a separate class? I am not sure how complicated that would get, though; I am OK with this approach too.

 # Can we rename {{TestDU#testDUWithSubtract}} to {{testDUWithExclude}} to be consistent with the naming?
 # In {{TestDU#testDUWithSubtract}}, the last assert statement has a typo.
{code:java}
assertTrue("invalid-disk-size", duSize >= writtenSize && writtenSize <= (duSize 
+ slack));
{code}
It should have been
{code:java}
 du <= (writtenSize + slack) {code}

 # In {{DatanodeInfo#getDatanodeReport()}}, can we report the new disk counters after the {{DFSRemaining%}} counter?
 # In {{DFSConfigKeys}},
{code:java}
  public static final String DFS_DATANODE_REPLICA_TRASH_PERCENT =  
"dfs.datanode.replica.trash.keep.alive.interval";
{code}
The value for the config parameter is mistyped.

 # In {{BlockPoolSlice}},
 ** In {{loadDfsUsed()}}, the variable {{replicaTrashUsed}} is not used.
 ** In {{loadReplicaTrashUsed}}, if we are using separate {{CachingGetSpaceUsed}} objects for {{dfsUsage}} and {{replicaTrashUsage}}, we should have separate cache files too.
 # The {{FsVolumeImpl#replicaTrashLimit}} variable can be final.
 # In {{FsVolumeImpl#onMetaFileDeletion()}}, we should not decrement the block count of the BP.
 # In {{DFSAdmin}}, can we let the DN figure out whether replicaTrash is enabled and send the report accordingly?


was (Author: hanishakoneru):
Thanks for working on this, [~bharatviswa]. 

Looks good overall. I have a few comments:
 # Can you add Javadoc and License to the 
{color:#3b73af}{{{color}CachingGetSpaceUsedWithExclude}} and {{DUWithExclude}}.
 # In {{DUWithExclude}}, we are calculating the {{du}} for both the path and 
the excludedPath and then subtracting the later from the former. We end up 
calculating the space used by replica trash twice this way.
{code:java}
setUsed((Long.parseLong(tokens[0]) * 1024) - (Long.parseLong(tokens1[0]) * 
1024));{code}
we could instead utilized the {{--exclude}} option of {{du}} command.
 Also, can we add the exclude option to {{DU.java}} itself instead of another 
class? I am not sure how complicated that would get though. I am ok with this 
approach too.

 # Can we rename {{TestDU#testDUWithSubtract}}, to {{testDUWithExclude}} to be 
consistent with the naming.
 # In {{TestDU#testDUWithSubtract}}, the last assert statement has a typo.
{code:java}
assertTrue("invalid-disk-size", duSize >= writtenSize && writtenSize <= (duSize 
+ slack));
{code}
Should have been
{code:java}
 du <= (writtenSize + slack) {code}

 # In {{DatanodeInfo#getDatanodeReport()}}, can we report the new disk counters 
after the {{DFSRemaining%}} counter.
 # In {{DFSConfigKeys}},
{code:java}
  public static final String DFS_DATANODE_REPLICA_TRASH_PERCENT =  
"dfs.datanode.replica.trash.keep.alive.interval";
{code}
The value for the config parameter is mistyped.

 # In {{BlockPoolSlice}},
 ** {{In loadDfsUsed(), variable }}{{replicaTrashUsed}} is not used.
 ** In {{loadReplicaTrashUsed}}, if we are using separate 
{{CachingGetSpaceUsed}} objects for {{dfsUsage}} and {{replicaTrashUsage}}, we 
should have separate Cache files too.
 # {{FsVolumeImpl#replicaTrashLimit}} variable can be final.
 # In {{FsVolumeImpl#onMetaFileDeletion()}}, we should not decrement the number 
of blocks count in the BP.
 # {{DFSAdmin}}, can we let the DN figure out whether replicaTrash is enabled 
or not and send the report accordingly?

> Add/ Update disk space counters for trash (trash used, disk remaining etc.) 
> 
>
> Key: HDFS-13329
> URL: https://issues.apache.org/jira/browse/HDFS-13329
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13329-HDFS-12996.01.patch, 
> HDFS-13329-HDFS-12996.02.patch
>
>
> Add 3 more counters required for datanode replica trash.
>  # diskAvailable
>  # 

[jira] [Comment Edited] (HDFS-13329) Add/ Update disk space counters for trash (trash used, disk remaining etc.)

2018-04-03 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424760#comment-16424760
 ] 

Hanisha Koneru edited comment on HDFS-13329 at 4/3/18 11:39 PM:


Thanks for working on this, [~bharatviswa]. 

Looks good overall. I have some comments:
 # Can you add Javadoc and a License header to {{CachingGetSpaceUsedWithExclude}} and {{DUWithExclude}}?
 # In {{DUWithExclude}}, we are calculating the {{du}} for both the path and the excludedPath and then subtracting the latter from the former. We end up calculating the space used by the replica trash twice this way.
{code:java}
setUsed((Long.parseLong(tokens[0]) * 1024) - (Long.parseLong(tokens1[0]) * 
1024));{code}
We could instead utilize the {{--exclude}} option of the {{du}} command.
 Also, can we add the exclude option to {{DU.java}} itself instead of a separate class? I am not sure how complicated that would get, though; I am OK with this approach too.

 # Can we rename {{TestDU#testDUWithSubtract}} to {{testDUWithExclude}} to be consistent with the naming?
 # In {{TestDU#testDUWithSubtract}}, the last assert statement has a typo.
{code:java}
assertTrue("invalid-disk-size", duSize >= writtenSize && writtenSize <= (duSize 
+ slack));
{code}
It should have been
{code:java}
 du <= (writtenSize + slack) {code}

 # In {{DatanodeInfo#getDatanodeReport()}}, can we report the new disk counters after the {{DFSRemaining%}} counter?
 # In {{DFSConfigKeys}},
{code:java}
  public static final String DFS_DATANODE_REPLICA_TRASH_PERCENT =  
"dfs.datanode.replica.trash.keep.alive.interval";
{code}
The value for the config parameter is mistyped.

 # In {{BlockPoolSlice}},
 ** In {{loadDfsUsed()}}, the variable {{replicaTrashUsed}} is not used.
 ** In {{loadReplicaTrashUsed}}, if we are using separate {{CachingGetSpaceUsed}} objects for {{dfsUsage}} and {{replicaTrashUsage}}, we should have separate cache files too.
 # The {{FsVolumeImpl#replicaTrashLimit}} variable can be final.
 # In {{FsVolumeImpl#onMetaFileDeletion()}}, we should not decrement the block count of the BP.
 # In {{DFSAdmin}}, can we let the DN figure out whether replicaTrash is enabled and send the report accordingly?


was (Author: hanishakoneru):
Thanks for working on this, [~bharatviswa]. 

Looks good overall. I have a some comments:
 # Can you add Javadoc and License to the 
{color:#3b73af}{{{color}CachingGetSpaceUsedWithExclude}} and {{DUWithExclude}}.
 # In {{DUWithExclude}}, we are calculating the {{du}} for both the path and 
the excludedPath and then subtracting the later from the former. We end up 
calculating the space used by replica trash twice this way.
{code:java}
setUsed((Long.parseLong(tokens[0]) * 1024) - (Long.parseLong(tokens1[0]) * 
1024));{code}
we could instead utilized the {{--exclude}} option of {{du}} command.
 Also, can we add the exclude option to {{DU.java}} itself instead of another 
class? I am not sure how complicated that would get though. I am ok with this 
approach too.

 # Can we rename {{TestDU#testDUWithSubtract}}, to {{testDUWithExclude}} to be 
consistent with the naming.
 # In {{TestDU#testDUWithSubtract}}, the last assert statement has a typo.
{code:java}
assertTrue("invalid-disk-size", duSize >= writtenSize && writtenSize <= (duSize 
+ slack));
{code}
Should have been
{code:java}
 du <= (writtenSize + slack) {code}

 # In {{DatanodeInfo#getDatanodeReport()}}, can we report the new disk counters 
after the {{DFSRemaining%}} counter.
 # In {{DFSConfigKeys}},
{code:java}
  public static final String DFS_DATANODE_REPLICA_TRASH_PERCENT =  
"dfs.datanode.replica.trash.keep.alive.interval";
{code}
The value for the config parameter is mistyped.

 # In {{BlockPoolSlice}},
 ** {{In loadDfsUsed(), variable }}{{replicaTrashUsed}} is not used.
 ** In {{loadReplicaTrashUsed}}, if we are using separate 
{{CachingGetSpaceUsed}} objects for {{dfsUsage}} and {{replicaTrashUsage}}, we 
should have separate Cache files too.
 # {{FsVolumeImpl#replicaTrashLimit}} variable can be final.
 # In {{FsVolumeImpl#onMetaFileDeletion()}}, we should not decrement the number 
of blocks count in the BP.
 # {{DFSAdmin}}, can we let the DN figure out whether replicaTrash is enabled 
or not and send the report accordingly?

> Add/ Update disk space counters for trash (trash used, disk remaining etc.) 
> 
>
> Key: HDFS-13329
> URL: https://issues.apache.org/jira/browse/HDFS-13329
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13329-HDFS-12996.01.patch, 
> HDFS-13329-HDFS-12996.02.patch
>
>
> Add 3 more counters required for datanode replica trash.
>  # diskAvailable
>  # 

[jira] [Commented] (HDFS-13329) Add/ Update disk space counters for trash (trash used, disk remaining etc.)

2018-04-03 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424760#comment-16424760
 ] 

Hanisha Koneru commented on HDFS-13329:
---

Thanks for working on this, [~bharatviswa]. 

Looks good overall. I have a few comments:
 # Can you add Javadoc and a License header to {{CachingGetSpaceUsedWithExclude}} and {{DUWithExclude}}?
 # In {{DUWithExclude}}, we are calculating the {{du}} for both the path and the excludedPath and then subtracting the latter from the former. We end up calculating the space used by the replica trash twice this way.
{code:java}
setUsed((Long.parseLong(tokens[0]) * 1024) - (Long.parseLong(tokens1[0]) * 
1024));{code}
We could instead utilize the {{--exclude}} option of the {{du}} command.
 Also, can we add the exclude option to {{DU.java}} itself instead of a separate class? I am not sure how complicated that would get, though; I am OK with this approach too.

 # Can we rename {{TestDU#testDUWithSubtract}} to {{testDUWithExclude}} to be consistent with the naming?
 # In {{TestDU#testDUWithSubtract}}, the last assert statement has a typo.
{code:java}
assertTrue("invalid-disk-size", duSize >= writtenSize && writtenSize <= (duSize 
+ slack));
{code}
It should have been
{code:java}
 du <= (writtenSize + slack) {code}

 # In {{DatanodeInfo#getDatanodeReport()}}, can we report the new disk counters after the {{DFSRemaining%}} counter?
 # In {{DFSConfigKeys}},
{code:java}
  public static final String DFS_DATANODE_REPLICA_TRASH_PERCENT =  
"dfs.datanode.replica.trash.keep.alive.interval";
{code}
The value for the config parameter is mistyped.

 # In {{BlockPoolSlice}},
 ** In {{loadDfsUsed()}}, the variable {{replicaTrashUsed}} is not used.
 ** In {{loadReplicaTrashUsed}}, if we are using separate {{CachingGetSpaceUsed}} objects for {{dfsUsage}} and {{replicaTrashUsage}}, we should have separate cache files too.
 # The {{FsVolumeImpl#replicaTrashLimit}} variable can be final.
 # In {{FsVolumeImpl#onMetaFileDeletion()}}, we should not decrement the block count of the BP.
 # In {{DFSAdmin}}, can we let the DN figure out whether replicaTrash is enabled and send the report accordingly?
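For illustration, the single-invocation {{--exclude}} idea could look roughly like the sketch below. This is not the actual Hadoop {{DU}}/{{DUWithExclude}} implementation, and it assumes GNU {{du}} (the {{--exclude}} flag is not portable to every platform):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class DuExcludeSketch {
    /**
     * Measure disk usage of `path` in bytes while excluding entries matching
     * `excludedPattern`, using one GNU du invocation instead of running du
     * twice and subtracting the results.
     */
    public static long duExcluding(String path, String excludedPattern) {
        try {
            Process p = new ProcessBuilder(
                "du", "-sk", "--exclude=" + excludedPattern, path).start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line = r.readLine();  // e.g. "1234\t/data/dn1"
                p.waitFor();
                // du -k reports KiB; convert to bytes.
                return Long.parseLong(line.split("\\s+")[0]) * 1024;
            }
        } catch (Exception e) {
            throw new RuntimeException("du failed for " + path, e);
        }
    }
}
```

This keeps a single filesystem walk, so the excluded subtree (e.g. the replica trash directory) is never scanned twice.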

> Add/ Update disk space counters for trash (trash used, disk remaining etc.) 
> 
>
> Key: HDFS-13329
> URL: https://issues.apache.org/jira/browse/HDFS-13329
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13329-HDFS-12996.01.patch, 
> HDFS-13329-HDFS-12996.02.patch
>
>
> Add 3 more counters required for datanode replica trash.
>  # diskAvailable
>  # replicaTrashUsed
>  # replicaTrashRemaining
> For more info on these counters, refer design document uploaded in HDFS-12996






[jira] [Updated] (HDFS-9492) RoundRobinVolumeChoosingPolicy

2018-04-03 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-9492:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> RoundRobinVolumeChoosingPolicy
> --
>
> Key: HDFS-9492
> URL: https://issues.apache.org/jira/browse/HDFS-9492
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: RoundRobinVolumeChoosingPolicy.HDFS-9492.patch, 
> RoundRobinVolumeChoosingPolicy.patch
>
>
> This is some general clean-up for: RoundRobinVolumeChoosingPolicy
> I have also updated and expanded the unit tests a bit.
> There is one error message that I changed. I felt the previous exception 
> message was not that helpful, so it could be trimmed down. If the exception 
> message must be enhanced, the entire list of "volumes" should be included.






[jira] [Updated] (HDFS-13393) Improve OOM logging

2018-04-03 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13393:
---
Description: 
It is not uncommon to find "java.lang.OutOfMemoryError: unable to create new 
native thread" errors in an HDFS cluster. Most often this happens when the 
DataNode creates DataXceiver threads, or when the balancer creates threads for 
moving blocks around.

In most cases, the "OOM" is a symptom of the number of threads reaching a 
system limit, rather than of actually running out of memory, and the current 
logging of this message is usually misleading (it suggests the error is due to 
insufficient memory).

How about capturing the OOM and, if it is due to "unable to create new native 
thread", printing a more helpful message like "bump your ulimit" or "take a 
jstack of the process"?

Even better, surface this error to make it more visible. It usually takes a 
while for an in-depth investigation to start after users notice a job failing, 
and by that time the evidence (like jstack output) may already be gone.

  was:
It is not uncommon to find "java.lang.OutOfMemoryError: unable to create new 
native thread" errors in a HDFS cluster. Most often this happens when DataNode 
creating DataXceiver threads, or when balancer creates threads for moving 
blocks around.

In most of cases, the "OOM" is a symptom of number of threads reaching system 
limit, rather than actually running out of memory, and the current logging of 
this message is usually misleading (suggesting this is due to insufficient 
memory)

How about capturing the OOM, and if it is due to "unable to create new native 
thread", print some more helpful message like "bump your ulimit" or "take a 
jstack of the process"?


> Improve OOM logging
> ---
>
> Key: HDFS-13393
> URL: https://issues.apache.org/jira/browse/HDFS-13393
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, datanode
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> It is not uncommon to find "java.lang.OutOfMemoryError: unable to create new 
> native thread" errors in an HDFS cluster. Most often this happens when the 
> DataNode creates DataXceiver threads, or when the balancer creates threads 
> for moving blocks around.
> In most cases, the "OOM" is a symptom of the number of threads reaching a 
> system limit, rather than of actually running out of memory, and the current 
> logging of this message is usually misleading (it suggests the error is due 
> to insufficient memory).
> How about capturing the OOM and, if it is due to "unable to create new native 
> thread", printing a more helpful message like "bump your ulimit" or "take a 
> jstack of the process"?
> Even better, surface this error to make it more visible. It usually takes a 
> while for an in-depth investigation to start after users notice a job 
> failing, and by that time the evidence (like jstack output) may already be gone.






[jira] [Updated] (HDFS-13393) Improve OOM logging

2018-04-03 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13393:
---
Description: 
It is not uncommon to find "java.lang.OutOfMemoryError: unable to create new 
native thread" errors in an HDFS cluster. Most often this happens when the 
DataNode creates DataXceiver threads, or when the balancer creates threads for 
moving blocks around.

In most cases, the "OOM" is a symptom of the number of threads reaching a 
system limit, rather than of actually running out of memory, and the current 
logging of this message is usually misleading (it suggests the error is due to 
insufficient memory).

How about capturing the OOM and, if it is due to "unable to create new native 
thread", printing a more helpful message like "bump your ulimit" or "take a 
jstack of the process"?

  was:
It is not uncommon to find "java.lang.OutOfMemoryError: unable to create new 
native thread" error in a HDFS cluster. Most often this happens when DataNode 
creating DataXceiver threads, or when balancer creates threads for moving 
blocks around.

In most of cases, the "OOM" is a symptom of number of threads reaching system 
limit, rather than actually running out of memory.

How about capturing the OOM, and if it is due to "unable to create new native 
thread", print some more helpful message like "bump your ulimit" or "take a 
jstack of the process"?


> Improve OOM logging
> ---
>
> Key: HDFS-13393
> URL: https://issues.apache.org/jira/browse/HDFS-13393
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover, datanode
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> It is not uncommon to find "java.lang.OutOfMemoryError: unable to create new 
> native thread" errors in an HDFS cluster. Most often this happens when the 
> DataNode creates DataXceiver threads, or when the balancer creates threads 
> for moving blocks around.
> In most cases, the "OOM" is a symptom of the number of threads reaching a 
> system limit, rather than of actually running out of memory, and the current 
> logging of this message is usually misleading (it suggests the error is due 
> to insufficient memory).
> How about capturing the OOM and, if it is due to "unable to create new native 
> thread", printing a more helpful message like "bump your ulimit" or "take a 
> jstack of the process"?






[jira] [Created] (HDFS-13393) Improve OOM logging

2018-04-03 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-13393:
--

 Summary: Improve OOM logging
 Key: HDFS-13393
 URL: https://issues.apache.org/jira/browse/HDFS-13393
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover, datanode
Reporter: Wei-Chiu Chuang


It is not uncommon to find "java.lang.OutOfMemoryError: unable to create new 
native thread" errors in an HDFS cluster. Most often this happens when the 
DataNode creates DataXceiver threads, or when the balancer creates threads for 
moving blocks around.

In most cases, the "OOM" is a symptom of the number of threads reaching a 
system limit, rather than of actually running out of memory.

How about capturing the OOM and, if it is due to "unable to create new native 
thread", printing a more helpful message like "bump your ulimit" or "take a 
jstack of the process"?
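The proposed check could be sketched as follows. The advice strings are illustrative only, not existing Hadoop log messages:

```java
public class OomDiagnostics {
    /**
     * Sketch of the proposed improvement: distinguish a thread-creation
     * failure from real heap exhaustion by inspecting the error message,
     * and return operator-oriented advice for the former.
     */
    public static String adviceFor(OutOfMemoryError e) {
        String msg = e.getMessage();
        if (msg != null && msg.contains("unable to create new native thread")) {
            return "Likely a thread/process limit rather than heap exhaustion: "
                + "check `ulimit -u` and take a jstack of the process before it exits.";
        }
        return "Possible heap exhaustion: consider raising -Xmx or taking a heap dump.";
    }
}
```

A caller (e.g. the DataXceiver accept loop or the balancer's dispatcher) would catch the OutOfMemoryError, log the returned advice, and rethrow.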






[jira] [Updated] (HDFS-13337) Backport HDFS-4275 to branch-2.9

2018-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13337:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.2
   2.10.0
   Status: Resolved  (was: Patch Available)

Thanks [~surmountian].
Committed to branch-2 and branch-2.9.

> Backport HDFS-4275 to branch-2.9
> 
>
> Key: HDFS-13337
> URL: https://issues.apache.org/jira/browse/HDFS-13337
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Minor
> Fix For: 2.10.0, 2.9.2
>
> Attachments: HDFS-13337-branch-2.000.patch
>
>
> Multiple HDFS test suites fail on Windows during initialization of 
> MiniDFSCluster due to "Could not fully delete" the name testing data 
> directory.






[jira] [Commented] (HDFS-13337) Backport HDFS-4275 to branch-2.9

2018-04-03 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424582#comment-16424582
 ] 

Xiao Liang commented on HDFS-13337:
---

Yes, I tested the patch locally and the test cases passed most of the time; 
there were occasional random failures, which should be caused by something 
else.

> Backport HDFS-4275 to branch-2.9
> 
>
> Key: HDFS-13337
> URL: https://issues.apache.org/jira/browse/HDFS-13337
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Íñigo Goiri
>Assignee: Xiao Liang
>Priority: Minor
> Attachments: HDFS-13337-branch-2.000.patch
>
>
> Multiple HDFS test suites fail on Windows during initialization of 
> MiniDFSCluster due to "Could not fully delete" the name testing data 
> directory.






[jira] [Commented] (HDFS-10419) Building HDFS on top of new storage layer (HDSL)

2018-04-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424574#comment-16424574
 ] 

Anu Engineer commented on HDFS-10419:
-

[~sanjay.radia] Thank you, we will rename HDSL to HDDS. [~shv], [~drankye] I am 
going to assume that HDDS is the way to go. Thanks for the comments and 
feedback.

> Building HDFS on top of new storage layer (HDSL)
> 
>
> Key: HDFS-10419
> URL: https://issues.apache.org/jira/browse/HDFS-10419
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Major
> Attachments: Evolving NN using new block-container layer.pdf
>
>
> In HDFS-7240, Ozone defines storage containers to store both the data and the 
> metadata. The storage container layer provides an object storage interface 
> and aims to manage data/metadata in a distributed manner. More details about 
> storage containers can be found in the design doc in HDFS-7240.
> HDFS can adopt the storage containers to store and manage blocks. The general 
> idea is:
> # Each block can be treated as an object and the block ID is the object's key.
> # Blocks will still be stored in DataNodes but as objects in storage 
> containers.
> # The block management work can be separated out of the NameNode and will be 
> handled by the storage container layer in a more distributed way. The 
> NameNode will only manage the namespace (i.e., files and directories).
> # For each file, the NameNode only needs to record a list of block IDs which 
> are used as keys to obtain real data from storage containers.
> # A new DFSClient implementation talks to both NameNode and the storage 
> container layer to read/write.
> HDFS, especially the NameNode, can get much better scalability from this 
> design. Currently the NameNode's heaviest workload comes from block 
> management, which includes maintaining the block-DataNode mapping, receiving 
> full/incremental block reports, tracking block states 
> (under/over/mis-replicated), and joining every write pipeline protocol to 
> guarantee data consistency. This work brings a high memory footprint and 
> makes the NameNode suffer from GC. HDFS-5477 already proposes to convert the 
> BlockManager into a service. If we can build HDFS on top of the storage 
> container layer, we not only separate the BlockManager out of the NameNode, 
> but also replace it with a new distributed management scheme.
> The storage container work is currently in progress in HDFS-7240, and the 
> work proposed here is still in an experimental/exploratory stage. We can do 
> this experiment in a feature branch so that people with interest can be 
> involved.
> A design doc will be uploaded later explaining more details.
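The proposed split of responsibilities can be pictured with a toy in-memory 
model (all names here are illustrative, not real Hadoop classes): the 
"NameNode" keeps only the namespace, a map from file path to block IDs, while 
block data lives behind the "container layer", keyed by block ID.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the design sketched above; a stand-in for the real
// NameNode/storage-container split, not an implementation of it.
public class ContainerModel {
    private final Map<String, List<Long>> namespace = new HashMap<>(); // NN state: path -> block IDs
    private final Map<Long, byte[]> containerLayer = new HashMap<>();  // stand-in for the container layer
    private long nextBlockId = 1;

    // Write path: data goes to the container layer; the NN records only the ID.
    public long appendBlock(String path, byte[] data) {
        long id = nextBlockId++;
        containerLayer.put(id, data);
        namespace.computeIfAbsent(path, p -> new ArrayList<>()).add(id);
        return id;
    }

    // Read path: resolve path -> block ID via the NN, then fetch by key.
    public byte[] readBlock(String path, int blockIndex) {
        return containerLayer.get(namespace.get(path).get(blockIndex));
    }
}
```

The point of the model is that the namespace map never sees block data or 
block locations; everything below the block ID is the container layer's 
problem, which is what lets block management move out of the NameNode.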






[jira] [Commented] (HDFS-10419) Building HDFS on top of new storage layer (HDSL)

2018-04-03 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424559#comment-16424559
 ] 

Sanjay Radia commented on HDFS-10419:
-

While I prefer HDSS, I would gladly let Anu, who has done the bulk of the 
heavy lifting in this project, have the final say on the name (unless his 
choice were truly horrible, which it isn't). The most passionate debates in 
any project are always about the name :)

+1  HDDS -  Hadoop Distributed Data Store.

> Building HDFS on top of new storage layer (HDSL)
> 
>
> Key: HDFS-10419
> URL: https://issues.apache.org/jira/browse/HDFS-10419
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Major
> Attachments: Evolving NN using new block-container layer.pdf
>
>
> In HDFS-7240, Ozone defines storage containers to store both the data and the 
> metadata. The storage container layer provides an object storage interface 
> and aims to manage data/metadata in a distributed manner. More details about 
> storage containers can be found in the design doc in HDFS-7240.
> HDFS can adopt the storage containers to store and manage blocks. The general 
> idea is:
> # Each block can be treated as an object and the block ID is the object's key.
> # Blocks will still be stored in DataNodes but as objects in storage 
> containers.
> # The block management work can be separated out of the NameNode and will be 
> handled by the storage container layer in a more distributed way. The 
> NameNode will only manage the namespace (i.e., files and directories).
> # For each file, the NameNode only needs to record a list of block IDs which 
> are used as keys to obtain real data from storage containers.
> # A new DFSClient implementation talks to both NameNode and the storage 
> container layer to read/write.
> HDFS, especially the NameNode, can get much better scalability from this 
> design. Currently the NameNode's heaviest workload comes from block 
> management, which includes maintaining the block-DataNode mapping, receiving 
> full/incremental block reports, tracking block states 
> (under/over/mis-replicated), and joining every write pipeline protocol to 
> guarantee data consistency. This work brings a high memory footprint and 
> makes the NameNode suffer from GC. HDFS-5477 already proposes to convert the 
> BlockManager into a service. If we can build HDFS on top of the storage 
> container layer, we not only separate the BlockManager out of the NameNode, 
> but also replace it with a new distributed management scheme.
> The storage container work is currently in progress in HDFS-7240, and the 
> work proposed here is still in an experimental/exploratory stage. We can do 
> this experiment in a feature branch so that people with interest can be 
> involved.
> A design doc will be uploaded later explaining more details.






[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424524#comment-16424524
 ] 

Íñigo Goiri commented on HDFS-13384:


The error in TestRouterWebHDFSContractCreate is tracked in HDFS-13353.

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.
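The general shape of such a timeout can be sketched as follows (a toy, not the 
Router or HDFS-12273 code; `TimedRpc` and `callWithTimeout` are hypothetical 
names): bound a per-subcluster call so one slow namespace cannot stall the 
whole fan-out.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Toy timeout wrapper; real code would reuse a shared pool rather than
// spinning up an executor per call.
public class TimedRpc {
    static <T> T callWithTimeout(Callable<T> rpc, long millis, T fallback) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return pool.submit(rpc).get(millis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException | InterruptedException | ExecutionException e) {
            return fallback; // caller decides how to surface the timeout/failure
        } finally {
            pool.shutdownNow();
        }
    }
}
```

Returning a fallback on timeout is just one policy; the improvement discussed 
here is about deciding how such failures are surfaced and aggregated.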






[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424432#comment-16424432
 ] 

genericqa commented on HDFS-13384:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m  6s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13384 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917297/HDFS-13384.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a67f3cff1c73 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5a174f8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23766/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23766/testReport/ |
| Max. process+thread count | 1527 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23766/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Commented] (HDFS-13391) Ozone: Make dependency of internal sub-module scope as provided in maven.

2018-04-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424408#comment-16424408
 ] 

genericqa commented on HDFS-13391:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
29s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
21s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-cblock in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 2s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
20s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m  8s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
46s{color} | {color:red} The patch generated 70 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}222m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | HDFS-13391 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917375/HDFS-13391-HDFS-7240.000.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  compile  javac  javadoc  
mvninstall  mvnsite  unit  shadedclient  xml  |
| uname | Linux 7ff9a645e3f4 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / ac77b18 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| shellcheck | v0.4.6 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23762/artifact/out/patch-mvninstall-hadoop-cblock.txt
 |
| unit | 

[jira] [Commented] (HDFS-12749) DN may not send block report to NN after NN restart

2018-04-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424402#comment-16424402
 ] 

genericqa commented on HDFS-12749:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 16s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-12749 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917385/HDFS-12749-trunk.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5cca398ec0d7 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5a174f8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23763/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23763/testReport/ |
| Max. process+thread count | 3466 (vs. ulimit of 

[jira] [Commented] (HDFS-13365) RBF: Adding trace support

2018-04-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424399#comment-16424399
 ] 

genericqa commented on HDFS-13365:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
45s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13365 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917401/HDFS-13365.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 95021ff0218b 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5a174f8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23765/testReport/ |
| Max. process+thread count | 1019 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23765/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Adding trace support
> -
>
> Key: HDFS-13365
> URL: https://issues.apache.org/jira/browse/HDFS-13365
>

[jira] [Created] (HDFS-13392) Incorrect length in Truncate CloseEvents

2018-04-03 Thread David Tucker (JIRA)
David Tucker created HDFS-13392:
---

 Summary: Incorrect length in Truncate CloseEvents
 Key: HDFS-13392
 URL: https://issues.apache.org/jira/browse/HDFS-13392
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: David Tucker


Under stress (multiple clients truncating separate non-empty files in half 
simultaneously), the CloseEvent triggered by a Truncate RPC may contain an 
incorrect length. We're able to reproduce this reliably ~20% of the time (our 
tests are somewhat randomized/fuzzy).

For example, given this Truncate request:
{noformat}
Request:
  truncate {
src: 
"/chai_test65c9a2a0-1188-439d-92e2-96a81c14a266-\357\254\200\357\272\217\357\255\217\343\203\276\324\262\342\204\200\342\213\251/chai_testbd968366-0016-4462-ac12-e48e0487bebd-\340\270\215\334\200\311\226\342\202\242\343\202\236\340\256\205\357\272\217/chai_testb5b155e8-b331-4f67-bdfa-546f82128b5d-\312\254\340\272\201\343\202\242\306\220\340\244\205\342\202\242\343\204\270a\334\240\337\213\340\244\240\343\200\243\342\202\243\343\203\276\313\225\346\206\250"
newLength: 2003855
clientName: 
"\341\264\275\327\220\343\203\250\333\263\343\220\205\357\254\227\340\270\201\340\245\251\306\225\341\203\265\334\220\342\202\243\343\204\206!A\343\206\215\357\254\201\340\273\223\347\224\260"
  }
  Block Size: 1048576B
  Old length: 4007711B (3.82205104828 blocks)
  Truncation: 2003856B (1.91102600098 blocks)
  New length: 2003855B (1.9110250473 blocks)
Response:
  result: true
{noformat}
We see these INotify events:
{noformat}
TruncateEvent {
path: 
/chai_test65c9a2a0-1188-439d-92e2-96a81c14a266-ffﺏﭏヾԲ℀⋩/chai_testbd968366-0016-4462-ac12-e48e0487bebd-ญ܀ɖ₢ゞஅﺏ/chai_testb5b155e8-b331-4f67-bdfa-546f82128b5d-ʬກアƐअ₢ㄸaܠߋठ〣₣ヾ˕憨
length: 2003855
timestamp: 1522716573143
}
{noformat}
{noformat}
CloseEvent {
path: 
/chai_test65c9a2a0-1188-439d-92e2-96a81c14a266-ffﺏﭏヾԲ℀⋩/chai_testbd968366-0016-4462-ac12-e48e0487bebd-ญ܀ɖ₢ゞஅﺏ/chai_testb5b155e8-b331-4f67-bdfa-546f82128b5d-ʬກアƐअ₢ㄸaܠߋठ〣₣ヾ˕憨
length: -2
timestamp: 1522716575723
}
{noformat}
{{-2}} is not the only number that shows up as the length; 
{{9223372036854775807}} is common too. These are detected by Python 2 tests, 
and the latter value is {{sys.maxint}}.






[jira] [Comment Edited] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block

2018-04-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424344#comment-16424344
 ] 

Xiao Chen edited comment on HDFS-13350 at 4/3/18 5:45 PM:
--

bq.  another JIRA code style consistency?
Ok.

+1 pending fixing pre-commit issues. Thanks Eddy!


was (Author: xiaochen):
bq.  another JIRA code style consistency?
Ok.

+1 pending fixing pre-commit issues.

> Negative legacy block ID will confuse Erasure Coding to be considered as 
> striped block
> --
>
> Key: HDFS-13350
> URL: https://issues.apache.org/jira/browse/HDFS-13350
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Attachments: HDFS-13350.00.patch, HDFS-13350.01.patch
>
>
> HDFS-4645 changed HDFS block IDs from randomly generated to sequential 
> positive IDs. Later, HDFS EC was built on the assumption that normal 
> 3x-replica block IDs are positive, so EC reuses negative IDs for striped 
> blocks.
> However, legacy block IDs in the system can be negative, so we should 
> not use a hardcoded method to check whether a block is striped:
> {code}
>   public static boolean isStripedBlockID(long id) {
>     return BlockType.fromBlockId(id) == STRIPED;
>   }
> {code}
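The misclassification described above can be sketched in miniature. This is a hedged simplification: the real BlockType.fromBlockId effectively keys off the sign bit, which we reduce here to a plain sign check, and the legacy ID value is hypothetical:

```java
public class StripedIdSketch {
    // Simplified stand-in for the hardcoded check quoted above:
    // any negative ID is treated as a striped (EC) block.
    static boolean isStripedBlockID(long id) {
        return id < 0;  // sign-bit heuristic
    }

    public static void main(String[] args) {
        long sequentialId = 1073741825L;              // post-HDFS-4645 style: positive
        long legacyRandomId = -4521350352524834234L;  // hypothetical pre-HDFS-4645 random ID

        System.out.println(isStripedBlockID(sequentialId));   // false: correctly replicated
        System.out.println(isStripedBlockID(legacyRandomId)); // true: but this is a replicated block
    }
}
```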






[jira] [Commented] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block

2018-04-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424344#comment-16424344
 ] 

Xiao Chen commented on HDFS-13350:
--

bq.  another JIRA code style consistency?
Ok.

+1 pending fixing pre-commit issues.

> Negative legacy block ID will confuse Erasure Coding to be considered as 
> striped block
> --
>
> Key: HDFS-13350
> URL: https://issues.apache.org/jira/browse/HDFS-13350
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Attachments: HDFS-13350.00.patch, HDFS-13350.01.patch
>
>
> HDFS-4645 changed HDFS block IDs from randomly generated to sequential 
> positive IDs. Later, HDFS EC was built on the assumption that normal 
> 3x-replica block IDs are positive, so EC reuses negative IDs for striped 
> blocks.
> However, legacy block IDs in the system can be negative, so we should 
> not use a hardcoded method to check whether a block is striped:
> {code}
>   public static boolean isStripedBlockID(long id) {
>     return BlockType.fromBlockId(id) == STRIPED;
>   }
> {code}






[jira] [Updated] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13384:
---
Status: Patch Available  (was: Open)

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.






[jira] [Commented] (HDFS-13365) RBF: Adding trace support

2018-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424299#comment-16424299
 ] 

Íñigo Goiri commented on HDFS-13365:


Updated after HDFS-13364.

> RBF: Adding trace support
> -
>
> Key: HDFS-13365
> URL: https://issues.apache.org/jira/browse/HDFS-13365
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13365.000.patch, HDFS-13365.001.patch, 
> HDFS-13365.003.patch, HDFS-13365.004.patch, HDFS-13365.005.patch
>
>
> We should support HTrace and add spans.






[jira] [Updated] (HDFS-13365) RBF: Adding trace support

2018-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13365:
---
Attachment: HDFS-13365.005.patch

> RBF: Adding trace support
> -
>
> Key: HDFS-13365
> URL: https://issues.apache.org/jira/browse/HDFS-13365
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13365.000.patch, HDFS-13365.001.patch, 
> HDFS-13365.003.patch, HDFS-13365.004.patch, HDFS-13365.005.patch
>
>
> We should support HTrace and add spans.






[jira] [Commented] (HDFS-13297) Add config validation util

2018-04-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424285#comment-16424285
 ] 

genericqa commented on HDFS-13297:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
 3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} hadoop-hdsl/common in HDFS-7240 has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-hdsl/common: The patch generated 18 new + 
0 unchanged - 0 fixed = 18 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 45 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | HDFS-13297 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916453/HDFS-13297-HDFS-7240.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 7f5551858dec 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 
12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / ac77b18 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23764/artifact/out/branch-findbugs-hadoop-hdsl_common-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23764/artifact/out/diff-checkstyle-hadoop-hdsl_common.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-13376) TLS support error in Native Build of hadoop-hdfs-native-client

2018-04-03 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424278#comment-16424278
 ] 

James Clampffer commented on HDFS-13376:


[~GeLiXin] I just meant that the warning below could be a little more specific. 
 Instead of GCC it could say "GCC 4.8.1 or later".

{code}
"FATAL ERROR: The required feature thread_local storage is not supported by 
your compiler.  Known compilers that support this feature: GCC, Visual Studio, 
Clang (community version), \ Clang (version for iOS 9 and later).")
{code}

 

That said, I think the patch you have now looks good, +1.  Let me know if you 
want to extend this and I can check that out too.

> TLS support error in Native Build of hadoop-hdfs-native-client
> --
>
> Key: HDFS-13376
> URL: https://issues.apache.org/jira/browse/HDFS-13376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation, native
>Affects Versions: 3.1.0
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-13376.001.patch
>
>
> mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
> -Pdist,native -DskipTests -Dtar
> {noformat}
> [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message):
>  [exec]   FATAL ERROR: The required feature thread_local storage is not 
> supported by
>  [exec]   your compiler.  Known compilers that support this feature: GCC, 
> Visual
>  [exec]   Studio, Clang (community version), Clang (version for iOS 9 and 
> later).
>  [exec]
>  [exec]
>  [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed
>  [exec] -- Configuring incomplete, errors occurred!
> {noformat}
> My environment:
> Linux: Red Hat 4.4.7-3
> cmake: 3.8.2
> java: 1.8.0_131
> gcc: 4.4.7
> maven: 3.5.0
> This seems to be caused by the low gcc version; I will report back after 
> confirming it. 
> Maybe {{BUILDING.txt}} needs an update to document the lowest supported gcc 
> version.






[jira] [Commented] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424256#comment-16424256
 ] 

Íñigo Goiri commented on HDFS-13364:


Thanks [~linyiqun] for the initial commit and the review.
Committed to branch-2 and branch-2.9.

> RBF: Support NamenodeProtocol in the Router
> ---
>
> Key: HDFS-13364
> URL: https://issues.apache.org/jira/browse/HDFS-13364
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.2, 3.2.0, 2.9.2
>
> Attachments: HDFS-13364-branch-2.001.patch, HDFS-13364.000.patch, 
> HDFS-13364.001.patch, HDFS-13364.002.patch, HDFS-13364.003.patch, 
> HDFS-13364.004.patch, HDFS-13364.005.patch
>
>
> The Router should support the NamenodeProtocol to get blocks, versions, etc.






[jira] [Updated] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13364:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> RBF: Support NamenodeProtocol in the Router
> ---
>
> Key: HDFS-13364
> URL: https://issues.apache.org/jira/browse/HDFS-13364
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.2, 3.2.0, 2.9.2
>
> Attachments: HDFS-13364-branch-2.001.patch, HDFS-13364.000.patch, 
> HDFS-13364.001.patch, HDFS-13364.002.patch, HDFS-13364.003.patch, 
> HDFS-13364.004.patch, HDFS-13364.005.patch
>
>
> The Router should support the NamenodeProtocol to get blocks, versions, etc.






[jira] [Updated] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13364:
---
Fix Version/s: 2.9.2
   2.10.0

> RBF: Support NamenodeProtocol in the Router
> ---
>
> Key: HDFS-13364
> URL: https://issues.apache.org/jira/browse/HDFS-13364
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.2, 3.2.0, 2.9.2
>
> Attachments: HDFS-13364-branch-2.001.patch, HDFS-13364.000.patch, 
> HDFS-13364.001.patch, HDFS-13364.002.patch, HDFS-13364.003.patch, 
> HDFS-13364.004.patch, HDFS-13364.005.patch
>
>
> The Router should support the NamenodeProtocol to get blocks, versions, etc.






[jira] [Commented] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424249#comment-16424249
 ] 

Íñigo Goiri commented on HDFS-13364:


[^HDFS-13364-branch-2.001.patch] builds fine on top of branch-2 and the unit 
tests succeed (other than HDFS-13311).
Committing to branch-2.

> RBF: Support NamenodeProtocol in the Router
> ---
>
> Key: HDFS-13364
> URL: https://issues.apache.org/jira/browse/HDFS-13364
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.1.0, 3.0.2, 3.2.0
>
> Attachments: HDFS-13364-branch-2.001.patch, HDFS-13364.000.patch, 
> HDFS-13364.001.patch, HDFS-13364.002.patch, HDFS-13364.003.patch, 
> HDFS-13364.004.patch, HDFS-13364.005.patch
>
>
> The Router should support the NamenodeProtocol to get blocks, versions, etc.






[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424225#comment-16424225
 ] 

Íñigo Goiri commented on HDFS-13388:


Thanks [~LiJinglun] for [^HADOOP-13388.0001.patch].
Besides the checkstyle fix, I would use static imports for the Assert and 
Mockito methods.
Other than that, it looks good.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN. 
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class, which handles invoked methods by 
> calling multiple configured NNs.
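The proxy-selection logic described above can be sketched in miniature. This is a hedged, much-simplified illustration of the intended behavior; only currentUsedProxy is a name taken from the description, everything else is illustrative and not the real Hadoop API:

```java
import java.util.concurrent.atomic.AtomicReference;

public class HedgingProxySketch {
    // Set once a hedged call identifies the active NN; null until then.
    static final AtomicReference<String> currentUsedProxy = new AtomicReference<>();

    // Intended behavior: hedge across all NNs only while the active NN is
    // unknown, then route subsequent calls straight to the known-good proxy.
    static String getProxy() {
        String known = currentUsedProxy.get();
        return known != null ? known : "hedging-proxy-over-all-NNs";
    }

    public static void main(String[] args) {
        System.out.println(getProxy()); // first call: hedge across all NNs
        currentUsedProxy.set("nn1");    // the hedged call found the active NN
        System.out.println(getProxy()); // subsequent calls go straight to nn1
    }
}
```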






[jira] [Commented] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-04-03 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424214#comment-16424214
 ] 

Takanobu Asanuma commented on HDFS-13353:
-

[~ywskycn] I see. I will keep it in mind.

> RBF: TestRouterWebHDFSContractCreate failed
> ---
>
> Key: HDFS-13353
> URL: https://issues.apache.org/jira/browse/HDFS-13353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13353.1.patch
>
>
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.685 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.147 s  <<< ERROR!
> java.io.FileNotFoundException: expected path to be visible before file 
> closed: not found 
> webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in 
> webhdfs://0.0.0.0:43796/test
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /test/testCreatedFileIsVisibleOnFlush
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1074)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1085)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:930)
>   ... 15 more
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> does not exist: /test/testCreatedFileIsVisibleOnFlush
>   at 

[jira] [Assigned] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13388:
--

Assignee: Jinglun

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN. 
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class, which handles invoked methods by 
> calling multiple configured NNs.






[jira] [Commented] (HDFS-13383) Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths

2018-04-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424198#comment-16424198
 ] 

genericqa commented on HDFS-13383:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} common in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
12s{color} | {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
14s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 16s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | HDFS-13383 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917373/HDFS-13383-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 8dc6f6c291eb 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / ac77b18 |
| maven | version: Apache Maven 3.3.9 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23761/artifact/out/branch-mvnsite-hadoop-ozone_common.txt
 |
| shellcheck | v0.4.6 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23761/artifact/out/patch-mvnsite-hadoop-ozone_common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23761/artifact/out/patch-unit-hadoop-ozone_common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23761/testReport/ |
| Max. process+thread count | 313 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/common U: hadoop-ozone/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23761/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths
> -
>
> Key: HDFS-13383
> URL: https://issues.apache.org/jira/browse/HDFS-13383
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13383-HDFS-7240.001.patch, 
> HDFS-13383-HDFS-7240.002.patch
>
>
> start-ozone.sh calls start-dfs.sh to start the NN and DN in an Ozone cluster. 
> Starting the datanode fails because of incomplete classpaths as 

[jira] [Commented] (HDFS-13386) RBF: wrong date information in list file(-ls) result

2018-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424193#comment-16424193
 ] 

Íñigo Goiri commented on HDFS-13386:


I have some internal code for this.
It doesn't have unit tests and it could use some of the utilities to check the 
parent paths.
Anyway, attached  [^HDFS-13386.000.patch].

> RBF: wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13386.000.patch, image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> this is happening because getMountPointDates is not implemented 
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
>   Map<String, Long> ret = new TreeMap<>();
>   // TODO add when we have a Mount Table
>   return ret;
> }
> {code}
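Since the TODO above is the whole bug, a hedged sketch of what the method might do once a mount table exists may help. The mount-table map and path handling below are illustrative assumptions, not the Router's actual API:

```java
import java.util.Map;
import java.util.TreeMap;

public class MountPointDatesSketch {
    // Stand-in for the Router's mount table (assumption for illustration):
    // mount path -> last-modified time in epoch millis.
    private final Map<String, Long> mountTable = new TreeMap<>();

    public MountPointDatesSketch() {
        mountTable.put("/ns0/data", 1522700000000L);
        mountTable.put("/ns1/logs", 1522710000000L);
    }

    /** Return the dates of the mount points directly under {@code path}. */
    public Map<String, Long> getMountPointDates(String path) {
        Map<String, Long> ret = new TreeMap<>();
        String prefix = path.endsWith("/") ? path : path + "/";
        for (Map.Entry<String, Long> e : mountTable.entrySet()) {
            String mount = e.getKey();
            // Keep only direct children of the requested path.
            if (mount.startsWith(prefix)
                    && mount.indexOf('/', prefix.length()) < 0) {
                ret.put(mount.substring(prefix.length()), e.getValue());
            }
        }
        return ret;
    }
}
```

With a real mount table backing this lookup, -ls could report these dates instead of a default value.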



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13386) RBF: wrong date information in list file(-ls) result

2018-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13386:
---
Attachment: HDFS-13386.000.patch

> RBF: wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13386.000.patch, image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> this is happening because getMountPointDates is not implemented 
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
> Map<String, Long> ret = new TreeMap<>();
> // TODO add when we have a Mount Table
> return ret;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13297) Add config validation util

2018-04-03 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-13297:
--
Status: Patch Available  (was: Open)

> Add config validation util
> --
>
> Key: HDFS-13297
> URL: https://issues.apache.org/jira/browse/HDFS-13297
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13297-HDFS-7240.000.patch
>
>
> Add a generic util to validate configuration based on TAGS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12749) DN may not send block report to NN after NN restart

2018-04-03 Thread He Xiaoqiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424184#comment-16424184
 ] 

He Xiaoqiao commented on HDFS-12749:


Updated patch v4, fixing the bug based on the comments, without a testcase.
[~kihwal], I can't find a graceful way to test the scenario where the NN 
correctly processed the registration but the DN timed out before receiving the 
response. Do you have a good idea?
Thanks again.

> DN may not send block report to NN after NN restart
> ---
>
> Key: HDFS-12749
> URL: https://issues.apache.org/jira/browse/HDFS-12749
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1, 2.8.3, 2.7.5, 3.0.0, 2.9.1
>Reporter: TanYuxin
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-12749-branch-2.7.002.patch, 
> HDFS-12749-trunk.003.patch, HDFS-12749-trunk.004.patch, HDFS-12749.001.patch
>
>
> Now our cluster has thousands of DNs and millions of files and blocks. When 
> the NN restarts, its load is very high.
> After the NN restarts, the DN will call the BPServiceActor#reRegister method 
> to register. But the register RPC will get an IOException since the NN is busy 
> dealing with Block Reports. The exception is caught at 
> BPServiceActor#processCommand.
> Next is the caught IOException:
> {code:java}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing 
> datanode Command
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/DataNode_IP:Port remote=NameNode_Host/IP:Port]; Host Details : local 
> host is: "DataNode_Host/Datanode_IP"; destination host is: 
> "NameNode_Host":Port;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
> at org.apache.hadoop.ipc.Client.call(Client.java:1474)
> at org.apache.hadoop.ipc.Client.call(Client.java:1407)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy13.registerDatanode(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.registerDatanode(DatanodeProtocolClientSideTranslatorPB.java:126)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:793)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reRegister(BPServiceActor.java:926)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:604)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:711)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:864)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The uncaught IOException breaks BPServiceActor#register, and the Block 
> Report cannot be sent immediately. 
> {code}
>   /**
>* Register one bp with the corresponding NameNode
>* 
>* The bpDatanode needs to register with the namenode on startup in order
>* 1) to report which storage it is serving now and 
>* 2) to receive a registrationID
>*  
>* issued by the namenode to recognize registered datanodes.
>* 
>* @param nsInfo current NamespaceInfo
>* @see FSNamesystem#registerDatanode(DatanodeRegistration)
>* @throws IOException
>*/
>   void register(NamespaceInfo nsInfo) throws IOException {
> // The handshake() phase loaded the block pool storage
> // off disk - so update the bpRegistration object from that info
> DatanodeRegistration newBpRegistration = bpos.createRegistration();
> LOG.info(this + " beginning handshake with NN");
> while (shouldRun()) {
>   try {
> // Use returned registration from namenode with updated fields
> newBpRegistration = bpNamenode.registerDatanode(newBpRegistration);
> newBpRegistration.setNamespaceInfo(nsInfo);
> bpRegistration = newBpRegistration;
> break;
>   } catch(EOFException e) {  // namenode might have just restarted
> LOG.info("Problem connecting to server: " + nnAddr + " :"
> + e.getLocalizedMessage());
> sleepAndLogInterrupts(1000, "connecting to server");
>   } catch(SocketTimeoutException e) {  // namenode is busy
> LOG.info("Problem connecting to server: " + nnAddr);
> sleepAndLogInterrupts(1000, "connecting to server");
>   }
> }
> 
> LOG.info("Block pool " + this + " successfully registered with NN");
> 

[jira] [Commented] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-04-03 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424173#comment-16424173
 ] 

Wei Yan commented on HDFS-13353:


[~tasanuma0829] From what I tried last time, I think 
{{testCreatedFileIsImmediatelyVisible}} doesn't work with the existing NN 
WebHDFS.

> RBF: TestRouterWebHDFSContractCreate failed
> ---
>
> Key: HDFS-13353
> URL: https://issues.apache.org/jira/browse/HDFS-13353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13353.1.patch
>
>
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.685 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.147 s  <<< ERROR!
> java.io.FileNotFoundException: expected path to be visible before file 
> closed: not found 
> webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in 
> webhdfs://0.0.0.0:43796/test
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /test/testCreatedFileIsVisibleOnFlush
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1074)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1085)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:930)
>   ... 15 more
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> 

[jira] [Updated] (HDFS-12749) DN may not send block report to NN after NN restart

2018-04-03 Thread He Xiaoqiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-12749:
---
Attachment: HDFS-12749-trunk.004.patch

> DN may not send block report to NN after NN restart
> ---
>
> Key: HDFS-12749
> URL: https://issues.apache.org/jira/browse/HDFS-12749
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1, 2.8.3, 2.7.5, 3.0.0, 2.9.1
>Reporter: TanYuxin
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-12749-branch-2.7.002.patch, 
> HDFS-12749-trunk.003.patch, HDFS-12749-trunk.004.patch, HDFS-12749.001.patch
>
>
> Now our cluster has thousands of DNs and millions of files and blocks. When 
> the NN restarts, its load is very high.
> After the NN restarts, the DN will call the BPServiceActor#reRegister method 
> to register. But the register RPC will get an IOException since the NN is busy 
> dealing with Block Reports. The exception is caught at 
> BPServiceActor#processCommand.
> Next is the caught IOException:
> {code:java}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing 
> datanode Command
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/DataNode_IP:Port remote=NameNode_Host/IP:Port]; Host Details : local 
> host is: "DataNode_Host/Datanode_IP"; destination host is: 
> "NameNode_Host":Port;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
> at org.apache.hadoop.ipc.Client.call(Client.java:1474)
> at org.apache.hadoop.ipc.Client.call(Client.java:1407)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy13.registerDatanode(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.registerDatanode(DatanodeProtocolClientSideTranslatorPB.java:126)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:793)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reRegister(BPServiceActor.java:926)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:604)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:711)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:864)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The uncaught IOException breaks BPServiceActor#register, and the Block 
> Report cannot be sent immediately. 
> {code}
>   /**
>* Register one bp with the corresponding NameNode
>* 
>* The bpDatanode needs to register with the namenode on startup in order
>* 1) to report which storage it is serving now and 
>* 2) to receive a registrationID
>*  
>* issued by the namenode to recognize registered datanodes.
>* 
>* @param nsInfo current NamespaceInfo
>* @see FSNamesystem#registerDatanode(DatanodeRegistration)
>* @throws IOException
>*/
>   void register(NamespaceInfo nsInfo) throws IOException {
> // The handshake() phase loaded the block pool storage
> // off disk - so update the bpRegistration object from that info
> DatanodeRegistration newBpRegistration = bpos.createRegistration();
> LOG.info(this + " beginning handshake with NN");
> while (shouldRun()) {
>   try {
> // Use returned registration from namenode with updated fields
> newBpRegistration = bpNamenode.registerDatanode(newBpRegistration);
> newBpRegistration.setNamespaceInfo(nsInfo);
> bpRegistration = newBpRegistration;
> break;
>   } catch(EOFException e) {  // namenode might have just restarted
> LOG.info("Problem connecting to server: " + nnAddr + " :"
> + e.getLocalizedMessage());
> sleepAndLogInterrupts(1000, "connecting to server");
>   } catch(SocketTimeoutException e) {  // namenode is busy
> LOG.info("Problem connecting to server: " + nnAddr);
> sleepAndLogInterrupts(1000, "connecting to server");
>   }
> }
> 
> LOG.info("Block pool " + this + " successfully registered with NN");
> bpos.registrationSucceeded(this, bpRegistration);
> // random short delay - helps scatter the BR from all DNs
> scheduler.scheduleBlockReport(dnConf.initialBlockReportDelay);
>   }
> {code}
> But NameNode has processed registerDatanode successfully, so it won't ask DN 
> to 
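As a hedged sketch of the fix direction discussed in this issue (not the actual patch), the retry loop above could also catch the generic IOException so re-registration keeps retrying instead of breaking. The RegisterCall interface and the retry helper below are stand-ins for illustration, not Hadoop APIs:

```java
import java.io.EOFException;
import java.io.IOException;
import java.net.SocketTimeoutException;

public class RegisterRetrySketch {
    /** Stand-in for the registerDatanode RPC (assumption for illustration). */
    interface RegisterCall {
        void register() throws IOException;
    }

    /** Retry registration, returning the attempt number that succeeded. */
    static int registerWithRetry(RegisterCall call, int maxAttempts)
            throws IOException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                call.register();
                return attempt;
            } catch (EOFException | SocketTimeoutException e) {
                // NN restarted or is busy: retry (the original behavior).
            } catch (IOException e) {
                // Previously uncaught: also retry, so the DN can still
                // register and send its block report once the NN settles.
            }
        }
        throw new IOException("registration failed after " + maxAttempts
                + " attempts");
    }
}
```

The point is only that a busy NN's transient failures should not escape the loop and silently leave the DN unregistered.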

[jira] [Commented] (HDFS-13353) RBF: TestRouterWebHDFSContractCreate failed

2018-04-03 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424168#comment-16424168
 ] 

Takanobu Asanuma commented on HDFS-13353:
-

Thanks for the review, [~ywskycn], and thanks for reporting it, [~elgoiri]! 
Sorry for my late response. I'll work on these issues this week or the next 
week.

[~ywskycn], IIUC "FileSystem.create" doesn't use hflush. So 
{{testCreatedFileIsImmediatelyVisible}} works fine for WebHdfs, doesn't it?

> RBF: TestRouterWebHDFSContractCreate failed
> ---
>
> Key: HDFS-13353
> URL: https://issues.apache.org/jira/browse/HDFS-13353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13353.1.patch
>
>
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.685 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testCreatedFileIsVisibleOnFlush(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.147 s  <<< ERROR!
> java.io.FileNotFoundException: expected path to be visible before file 
> closed: not found 
> webhdfs://0.0.0.0:43796/test/testCreatedFileIsVisibleOnFlush in 
> webhdfs://0.0.0.0:43796/test
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:936)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:914)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:294)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreatedFileIsVisibleOnFlush(AbstractContractCreateTest.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /test/testCreatedFileIsVisibleOnFlush
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$800(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:877)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:843)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:676)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1074)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1085)
>   at 
> 

[jira] [Commented] (HDFS-13391) Ozone: Make dependency of internal sub-module scope as provided in maven.

2018-04-03 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424153#comment-16424153
 ] 

Mukul Kumar Singh commented on HDFS-13391:
--

Thanks for working on the patch [~nandakumar131]. I applied and tested the 
patch locally.

+1, the patch works as expected and looks good to me.

I will commit this after the Jenkins results.

> Ozone: Make dependency of internal sub-module scope as provided in maven.
> -
>
> Key: HDFS-13391
> URL: https://issues.apache.org/jira/browse/HDFS-13391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13391-HDFS-7240.000.patch
>
>
> Whenever an internal sub-module is added as a dependency, the scope has to be 
> set to {{provided}}.
> If the scope is not mentioned, it falls back to the default scope, which is 
> {{compile}}; this makes the dependency jar (sub-module jar) get copied to the 
> {{share//lib}} directory while packaging. Since we use 
> {{copyifnotexists}} logic, the binary jar of the actual sub-module will not 
> be copied. This will result in the jar being placed in the wrong location 
> inside the distribution.
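For illustration, such a dependency declaration would look like the following; the artifactId is an example chosen to match the modules in this build, not a prescription for any particular pom:

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-ozone-common</artifactId>
  <version>${project.version}</version>
  <!-- provided: available at compile time, but the jar is not bundled
       into the share/.../lib directory by the packaging step -->
  <scope>provided</scope>
</dependency>
```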



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13390) AccessControlException for overwrite but not for delete

2018-04-03 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424130#comment-16424130
 ] 

Rushabh S Shah edited comment on HDFS-13390 at 4/3/18 2:58 PM:
---

{quote}So why overwriting file will produce AccessControlException and not the 
delete method?
{quote}
[https://github.com/apache/hadoop/blob/branch-2.9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java#L162]
 For {{startFileInt}} with the overwrite flag, it resolves to these arguments:
{noformat}
  checkPermission(iip, false, null, null, FsAction.WRITE, null, false)
{noformat}
For {{delete}} path, it resolves to all these arguments:
{noformat}
fsd.checkPermission(iip, false, null, FsAction.WRITE, null, FsAction.ALL, true);
{noformat}
Here are all the {{checkPermission}} parameters.
{noformat}
checkPermission(INodesInPath inodesInPath, boolean doCheckOwner,
  FsAction ancestorAccess, FsAction parentAccess, FsAction access,
  FsAction subAccess, boolean ignoreEmptyDir)
{noformat}
If you notice the {{access}} parameter, it is null for {{delete}} and 
{{FsAction.WRITE}} for {{startFileInt}}.
 That means {{delete}} skips checking whether the file itself is writable, 
while {{startFileInt}} checks whether the file is writable by the current user.
 I don't have much context on why the behavior is different.
 IMO it should be the same (i.e. you should be able to overwrite a file if you 
are able to delete it), but you can provide a patch and let others review it.
Just FYI, this patch would be an incompatible change, in that some jobs will no 
longer fail where they expect to fail.
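A tiny self-contained model may make that asymmetry concrete. The enum and method below only mimic the behavior described above; they are not FSPermissionChecker's real signature:

```java
public class AccessCheckSketch {
    enum FsAction { READ, WRITE, ALL }

    /**
     * Simplified target-inode check: a null access (the delete path) skips
     * the file's own permission bits, while FsAction.WRITE (the
     * create/overwrite path) enforces them. Delete's parent-directory WRITE
     * and subAccess checks are out of scope here.
     */
    static boolean mayProceed(boolean fileWritableByUser, FsAction access) {
        if (access == null) {
            return true;  // delete: the file's own bits are not consulted
        }
        return fileWritableByUser;  // overwrite: WRITE on the file required
    }
}
```

Under this model, a read-only file owned by someone else is deletable but not overwritable, which is exactly the reported behavior.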


was (Author: shahrs87):
bq. So why overwriting file will produce AccessControlException and not the 
delete method?
https://github.com/apache/hadoop/blob/branch-2.9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java#L162
For {{startFileInt}} with the overwrite flag, it resolves to these arguments:
{noformat}
  checkPermission(iip, false, null, null, FsAction.WRITE, null, false)
{noformat}

For {{delete}} path, it resolves to all these arguments:
{noformat}
fsd.checkPermission(iip, false, null, FsAction.WRITE, null, FsAction.ALL, true);
{noformat}

Here are all the {{checkPermission}} parameters.
{noformat}
checkPermission(INodesInPath inodesInPath, boolean doCheckOwner,
  FsAction ancestorAccess, FsAction parentAccess, FsAction access,
  FsAction subAccess, boolean ignoreEmptyDir)
{noformat}

If you notice the {{access}} parameter, it is null for {{delete}} and 
{{FsAction.WRITE}} for {{startFileInt}}.
That means {{delete}} skips checking whether the file itself is writable, while 
{{startFileInt}} checks whether the file is writable by the current user.
I don't have much context on why the behavior is different.
IMO it should be the same, but you can provide a patch and let others review 
it.


> AccessControlException for overwrite but not for delete
> ---
>
> Key: HDFS-13390
> URL: https://issues.apache.org/jira/browse/HDFS-13390
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.9.0
> Environment: *Environment:*
> OS: Centos
> PyArrow Version: 0.8.0
> Python version: 3.6
> HDFS: 2.9
>Reporter: Nasir Ali
>Priority: Minor
>
>  
> *Problem:*
> I have a file (F-1) saved in HDFS with permissions set to "-rw-r--r--", owned 
> by user "cnali". User "nndugudi" cannot overwrite F-1 (and vice versa). 
> hdfs.write will generate the following exception:
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=nndugudi, access=WRITE, 
> inode="/cerebralcortex/data/-f81c-44d2-9db8-fea69f468d58/-5087-3d56-ad0e-0b27c3c83182/20171105.gz":cnali:supergroup:-rw-r--r--
> However, user "nndugudi" can delete the file without any problem. So why does 
> overwriting the file produce an AccessControlException but not the delete 
> method?
> *Sample Code*:
> File: 
> [https://github.com/MD2Korg/CerebralCortex/blob/master/cerebralcortex/core/data_manager/raw/stream_handler.py]
> LOC: 659-705 (write_hdfs_day_file)
>  
> *HDFS Configurations*:
> All configurations are set to default. Security is also disabled as of now.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13390) AccessControlException for overwrite but not for delete

2018-04-03 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424130#comment-16424130
 ] 

Rushabh S Shah commented on HDFS-13390:
---

bq. So why overwriting file will produce AccessControlException and not the 
delete method?
https://github.com/apache/hadoop/blob/branch-2.9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java#L162
For {{startFileInt}} with the overwrite flag, it resolves to these arguments:
{noformat}
  checkPermission(iip, false, null, null, FsAction.WRITE, null, false)
{noformat}

For {{delete}} path, it resolves to all these arguments:
{noformat}
fsd.checkPermission(iip, false, null, FsAction.WRITE, null, FsAction.ALL, true);
{noformat}

Here are all the {{checkPermission}} parameters.
{noformat}
checkPermission(INodesInPath inodesInPath, boolean doCheckOwner,
  FsAction ancestorAccess, FsAction parentAccess, FsAction access,
  FsAction subAccess, boolean ignoreEmptyDir)
{noformat}

If you notice the {{access}} parameter, it is null for {{delete}} and 
{{FsAction.WRITE}} for {{startFileInt}}.
That means {{delete}} skips checking whether the file itself is writable, while 
{{startFileInt}} checks whether the file is writable by the current user.
I don't have much context on why the behavior is different.
IMO it should be the same, but you can provide a patch and let others review 
it.


> AccessControlException for overwrite but not for delete
> ---
>
> Key: HDFS-13390
> URL: https://issues.apache.org/jira/browse/HDFS-13390
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.9.0
> Environment: *Environment:*
> OS: Centos
> PyArrow Version: 0.8.0
> Python version: 3.6
> HDFS: 2.9
>Reporter: Nasir Ali
>Priority: Minor
>
>  
> *Problem:*
> I have a file (F-1) saved in HDFS with permissions set to "-rw-r--r--", owned 
> by user "cnali". User "nndugudi" cannot overwrite F-1 (and vice versa). 
> hdfs.write will generate the following exception:
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=nndugudi, access=WRITE, 
> inode="/cerebralcortex/data/-f81c-44d2-9db8-fea69f468d58/-5087-3d56-ad0e-0b27c3c83182/20171105.gz":cnali:supergroup:-rw-r--r--
> However, user "nndugudi" can delete the file without any problem. So why does 
> overwriting the file produce an AccessControlException but not the delete 
> method?
> *Sample Code*:
> File: 
> [https://github.com/MD2Korg/CerebralCortex/blob/master/cerebralcortex/core/data_manager/raw/stream_handler.py]
> LOC: 659-705 (write_hdfs_day_file)
>  
> *HDFS Configurations*:
> All configurations are set to default. Security is also disabled as of now.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13391) Ozone: Make dependency of internal sub-module scope as provided in maven.

2018-04-03 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-13391:
---
Status: Patch Available  (was: Open)

> Ozone: Make dependency of internal sub-module scope as provided in maven.
> -
>
> Key: HDFS-13391
> URL: https://issues.apache.org/jira/browse/HDFS-13391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13391-HDFS-7240.000.patch
>
>
> Whenever an internal sub-module is added as a dependency, the scope has to be 
> set to {{provided}}.
> If the scope is not mentioned, it falls back to the default scope, which is 
> {{compile}}; this makes the dependency jar (sub-module jar) get copied to the 
> {{share//lib}} directory while packaging. Since we use 
> {{copyifnotexists}} logic, the binary jar of the actual sub-module will not 
> be copied. This will result in the jar being placed in the wrong location 
> inside the distribution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13391) Ozone: Make dependency of internal sub-module scope as provided in maven.

2018-04-03 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-13391:
---
Attachment: HDFS-13391-HDFS-7240.000.patch

> Ozone: Make dependency of internal sub-module scope as provided in maven.
> -
>
> Key: HDFS-13391
> URL: https://issues.apache.org/jira/browse/HDFS-13391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDFS-13391-HDFS-7240.000.patch
>
>
> Whenever an internal sub-module is added as a dependency, the scope has to be 
> set to {{provided}}.
> If the scope is not specified, it falls back to the default scope, which is 
> {{compile}}; this causes the dependency jar (sub-module jar) to be copied to 
> the {{share//lib}} directory while packaging. Since we use 
> {{copyifnotexists}} logic, the binary jar of the actual sub-module will not 
> be copied. This results in the jar being placed in the wrong location 
> inside the distribution.






[jira] [Commented] (HDFS-13383) Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths

2018-04-03 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424098#comment-16424098
 ] 

Nanda kumar commented on HDFS-13383:


Thanks [~msingh] for working on this.
+1 on patch v002, LGTM. I will commit this shortly.

> Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths
> -
>
> Key: HDFS-13383
> URL: https://issues.apache.org/jira/browse/HDFS-13383
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13383-HDFS-7240.001.patch, 
> HDFS-13383-HDFS-7240.002.patch
>
>
> start-ozone.sh calls start-dfs.sh to start the NN and DN in an ozone cluster. 
> Starting the datanode fails because of an incomplete classpath, as the 
> datanode is unable to load all the plugins.
> Setting the classpath to the following value does resolve the issue:
> {code}
> export 
> HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/lib/*
> {code}






[jira] [Updated] (HDFS-13383) Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths

2018-04-03 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-13383:
-
Attachment: HDFS-13383-HDFS-7240.002.patch

> Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths
> -
>
> Key: HDFS-13383
> URL: https://issues.apache.org/jira/browse/HDFS-13383
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13383-HDFS-7240.001.patch, 
> HDFS-13383-HDFS-7240.002.patch
>
>
> start-ozone.sh calls start-dfs.sh to start the NN and DN in an ozone cluster. 
> Starting the datanode fails because of an incomplete classpath, as the 
> datanode is unable to load all the plugins.
> Setting the classpath to the following value does resolve the issue:
> {code}
> export 
> HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/lib/*
> {code}






[jira] [Created] (HDFS-13391) Ozone: Make dependency of internal sub-module scope as provided in maven.

2018-04-03 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-13391:
--

 Summary: Ozone: Make dependency of internal sub-module scope as 
provided in maven.
 Key: HDFS-13391
 URL: https://issues.apache.org/jira/browse/HDFS-13391
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nanda kumar
Assignee: Nanda kumar


Whenever an internal sub-module is added as a dependency, the scope has to be 
set to {{provided}}.
If the scope is not specified, it falls back to the default scope, which is 
{{compile}}; this causes the dependency jar (sub-module jar) to be copied to 
the {{share//lib}} directory while packaging. Since we use {{copyifnotexists}} 
logic, the binary jar of the actual sub-module will not be copied. This 
results in the jar being placed in the wrong location inside the distribution.






[jira] [Created] (HDFS-13390) AccessControlException for overwrite but not for delete

2018-04-03 Thread Nasir Ali (JIRA)
Nasir Ali created HDFS-13390:


 Summary: AccessControlException for overwrite but not for delete
 Key: HDFS-13390
 URL: https://issues.apache.org/jira/browse/HDFS-13390
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.9.0
 Environment: *Environment:*
OS: Centos
PyArrow Version: 0.8.0
Python version: 3.6
HDFS: 2.9
Reporter: Nasir Ali


 

*Problem:*
I have a file (F-1) saved in HDFS with permissions set to "-rw-r--r--", owned by 
user "cnali". User "nndugudi" cannot overwrite F-1 (and vice versa). hdfs.write 
will generate the following exception:

org.apache.hadoop.security.AccessControlException: Permission denied: 
user=nndugudi, access=WRITE, 
inode="/cerebralcortex/data/-f81c-44d2-9db8-fea69f468d58/-5087-3d56-ad0e-0b27c3c83182/20171105.gz":cnali:supergroup:-rw-r--r--

However, user "nndugudi" can delete the file without any problem. So why does 
overwriting the file produce an AccessControlException while the delete method does not?
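The asymmetry described above typically comes from where the permission check is applied. The following is a model of the rule, not Hadoop's actual code, and the class, method names, and parent-directory permissions are assumptions for illustration: overwriting writes to the file itself, so it needs the WRITE bit on the file's inode; deleting modifies the parent directory's listing, so it is checked against the parent directory's permissions instead.

```java
// A model of POSIX/HDFS-style permission checking, not Hadoop's actual code.
// perms strings use ls-style notation, e.g. "-rw-r--r--" or "drwxrwxrwx".
public class PermissionSketch {
    static boolean canOverwrite(String user, String fileOwner, String filePerms) {
        // overwrite needs the WRITE bit on the file's own inode
        return user.equals(fileOwner) ? filePerms.charAt(2) == 'w'
                                      : filePerms.charAt(8) == 'w';
    }

    static boolean canDelete(String user, String dirOwner, String dirPerms) {
        // delete is governed by the PARENT directory's write bits
        return user.equals(dirOwner) ? dirPerms.charAt(2) == 'w'
                                     : dirPerms.charAt(8) == 'w';
    }

    public static void main(String[] args) {
        // F-1 is -rw-r--r-- owned by cnali; the parent dir here is assumed
        // to be world-writable (drwxrwxrwx) for illustration
        System.out.println(canOverwrite("nndugudi", "cnali", "-rw-r--r--")); // false
        System.out.println(canDelete("nndugudi", "cnali", "drwxrwxrwx"));   // true
    }
}
```

Under this model, "nndugudi" is denied the overwrite (no write bit for others on the file) but allowed the delete (the parent directory grants write to everyone), matching the observed behaviour.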

*Sample Code*:
File: 
[https://github.com/MD2Korg/CerebralCortex/blob/master/cerebralcortex/core/data_manager/raw/stream_handler.py]

LOC: 659-705 (write_hdfs_day_file)

 

*HDFS Configurations*:

All configurations are set to default. Security is also disabled as of now.

 






[jira] [Commented] (HDFS-13387) Make multi-thread access class BlockInfoContiguous thread safe

2018-04-03 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424032#comment-16424032
 ] 

Rushabh S Shah commented on HDFS-13387:
---

If you plan to revive work on HDFS-8966, please add a design doc there.
It will greatly help people understand and review.
Thanks.

> Make multi-thread access class BlockInfoContiguous thread safe
> --
>
> Key: HDFS-13387
> URL: https://issues.apache.org/jira/browse/HDFS-13387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>
> This jira aims to make the class BlockInfoContiguous thread-safe so that we 
> no longer need the NameSystemLock to lock the full flow. This is just a 
> base step toward the plan of HDFS-8966.






[jira] [Created] (HDFS-13389) Ozone: Compile Ozone/HDFS/Cblock protobuf files with proto3 compiler using maven protoc plugin

2018-04-03 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-13389:


 Summary: Ozone: Compile Ozone/HDFS/Cblock protobuf files with 
proto3 compiler using maven protoc plugin
 Key: HDFS-13389
 URL: https://issues.apache.org/jira/browse/HDFS-13389
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


Currently all the Ozone/HDFS/Cblock proto files are compiled using proto 2.5; 
this can be changed to use the proto3 compiler.

This change will also help improve performance, because currently in the 
client path the xceiver client ratis converts proto2 classes to proto3 
using byte string manipulation.

Please note that for the rest of hadoop (except Ozone/Cblock/HDSL), the protoc 
version will still remain 2.5, as this proto compilation will be done through 
the following plugin: 
https://www.xolstice.org/protobuf-maven-plugin/
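The xolstice plugin referenced above is typically wired into a module like this (a sketch: the version numbers are illustrative, and `${os.detected.classifier}` assumes the os-maven-plugin extension is active in the build):

```xml
<!-- Illustrative configuration of the protobuf-maven-plugin; versions
     shown here are examples, not the ones chosen by this jira. -->
<plugin>
  <groupId>org.xolstice.maven.plugins</groupId>
  <artifactId>protobuf-maven-plugin</artifactId>
  <version>0.5.1</version>
  <configuration>
    <!-- pulls a platform-specific protoc 3.x binary instead of proto 2.5 -->
    <protocArtifact>com.google.protobuf:protoc:3.5.1:exe:${os.detected.classifier}</protocArtifact>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```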








[jira] [Updated] (HDFS-13382) Allocator does not initialize lotSize during hdfs mover process

2018-04-03 Thread Qingxin Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingxin Wu updated HDFS-13382:
--
Description: 
Currently, when we execute 
{code:java}
hdfs mover -p /some/path
{code}
the moverThreadAllocator in org.apache.hadoop.hdfs.server.balancer.Dispatcher 
does not initialize lotSize according to _dfs.mover.moverThreads and 
dfs.datanode.balance.max.concurrent.moves._ So, when we invoke the 
moverThreadAllocator.allocate() method, it will always return 1.

 

 

 

  was:
Currently, when we execute 
{code:java}
hdfs mover -p /some/path
{code}
the moverThreadAllocator in org.apache.hadoop.hdfs.server.balancer.Dispatcher 
does not initialize lotSize according to  _dfs.mover.moverThreads and 
dfs.datanode.balance.max.concurrent.moves._ 

 

 

 


> Allocator does not initialize lotSize during hdfs mover process
> ---
>
> Key: HDFS-13382
> URL: https://issues.apache.org/jira/browse/HDFS-13382
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.7.5
>Reporter: Qingxin Wu
>Priority: Major
> Attachments: HDFS-13382.001.patch
>
>
> Currently, when we execute 
> {code:java}
> hdfs mover -p /some/path
> {code}
> the moverThreadAllocator in org.apache.hadoop.hdfs.server.balancer.Dispatcher 
> does not initialize lotSize according to _dfs.mover.moverThreads and 
> dfs.datanode.balance.max.concurrent.moves._ So, when we invoke the 
> moverThreadAllocator.allocate() method, it will always return 1.
>  
>  
>  






[jira] [Assigned] (HDFS-13079) Provide a config to start namenode in safemode state upto a certain transaction id

2018-04-03 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDFS-13079:
--

Assignee: Shashikant Banerjee  (was: Mukul Kumar Singh)

> Provide a config to start namenode in safemode state upto a certain 
> transaction id
> --
>
> Key: HDFS-13079
> URL: https://issues.apache.org/jira/browse/HDFS-13079
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13079.001.patch
>
>
> In some cases it is necessary to roll the Namenode back to a certain 
> transaction id. This is especially needed when the user issues a {{rm -Rf 
> -skipTrash}} by mistake.
> Rolling back to a transaction id helps in taking a peek at the filesystem at 
> a particular instant. This jira proposes to provide a configuration variable 
> using which the namenode can be started upto a certain transaction id. The 
> filesystem will be in a readonly safemode which cannot be overridden 
> manually. It will only be overridden by removing the config value from the 
> config file. Please also note that this will not cause any changes in the 
> filesystem state, the filesystem will be in safemode state and no changes to 
> the filesystem state will be allowed.
> Please note that in case a checkpoint has already happened and the requested 
> transaction id has been subsumed in an FSImage, then the namenode will be 
> started with the next nearest transaction id. Further FSImage files and edits 
> will be ignored.
> If the checkpoint hasn't happened, then the namenode will be started with the 
> exact transaction id.






[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423945#comment-16423945
 ] 

genericqa commented on HDFS-13388:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 4 new + 7 unchanged - 0 fixed = 11 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
27s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13388 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917346/HADOOP-13388.0001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b5524e328135 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2be64eb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23760/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23760/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23760/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HDFS-13348) Ozone: Update IP and hostname in Datanode from SCM's response to the register call

2018-04-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423940#comment-16423940
 ] 

genericqa commented on HDFS-13348:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
15s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
17s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
15s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
11s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
10s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} server-scm in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 13s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 13s{color} 
| {color:red} 

[jira] [Commented] (HDFS-13348) Ozone: Update IP and hostname in Datanode from SCM's response to the register call

2018-04-03 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423874#comment-16423874
 ] 

Shashikant Banerjee commented on HDFS-13348:


Thanks [~nandakumar131], for the review. Patch v1 addresses your review 
comments.

> Ozone: Update IP and hostname in Datanode from SCM's response to the register 
> call
> --
>
> Key: HDFS-13348
> URL: https://issues.apache.org/jira/browse/HDFS-13348
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13348-HDFS-7240.000.patch, 
> HDFS-13348-HDFS-7240.001.patch
>
>
> Whenever a Datanode registers with SCM, the SCM resolves the IP address and 
> hostname of the Datanode from the RPC call. This IP address and hostname 
> should be sent back to the Datanode in the response to the register call, and 
> the Datanode has to update the values from the response in its 
> {{DatanodeDetails}}.






[jira] [Updated] (HDFS-13348) Ozone: Update IP and hostname in Datanode from SCM's response to the register call

2018-04-03 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-13348:
---
Attachment: HDFS-13348-HDFS-7240.001.patch

> Ozone: Update IP and hostname in Datanode from SCM's response to the register 
> call
> --
>
> Key: HDFS-13348
> URL: https://issues.apache.org/jira/browse/HDFS-13348
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13348-HDFS-7240.000.patch, 
> HDFS-13348-HDFS-7240.001.patch
>
>
> Whenever a Datanode registers with SCM, the SCM resolves the IP address and 
> hostname of the Datanode from the RPC call. This IP address and hostname 
> should be sent back to the Datanode in the response to the register call, and 
> the Datanode has to update the values from the response in its 
> {{DatanodeDetails}}.






[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread Jinglun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-13388:
---
Status: Patch Available  (was: Open)

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every time, 
> even when we have already found the successful NN. 
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class, which handles each invoked method by 
> calling multiple configured NNs.
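The pinning behaviour the description expects can be sketched with a plain java.lang.reflect.Proxy. This is an illustration, not the Hadoop implementation: the interface, counters, and class names here are hypothetical, and only the currentUsedProxy field name mirrors the real provider.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for the NameNode RPC interface.
interface NameNodeProtocol {
    String getStatus();
}

public class HedgingSketch {
    // Simplified hedging handler: on the first call it tries every configured
    // proxy until one succeeds, then pins all later calls to that proxy.
    static class HedgingHandler implements InvocationHandler {
        private final List<NameNodeProtocol> targets;
        private volatile NameNodeProtocol currentUsedProxy; // null until the first success

        HedgingHandler(List<NameNodeProtocol> targets) {
            this.targets = targets;
        }

        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            if (currentUsedProxy != null) {
                // Subsequent calls go straight to the previously successful NN.
                return method.invoke(currentUsedProxy, args);
            }
            Throwable last = null;
            for (NameNodeProtocol t : targets) { // hedge across all configured NNs
                try {
                    Object result = method.invoke(t, args);
                    currentUsedProxy = t;        // remember the successful NN
                    return result;
                } catch (Throwable e) {
                    last = e;
                }
            }
            throw last;
        }
    }

    public static void main(String[] args) {
        int[] calls = new int[2]; // call counters for the two fake NNs
        NameNodeProtocol standby = () -> { calls[0]++; throw new IllegalStateException("standby"); };
        NameNodeProtocol active = () -> { calls[1]++; return "active"; };

        NameNodeProtocol hedged = (NameNodeProtocol) Proxy.newProxyInstance(
                NameNodeProtocol.class.getClassLoader(),
                new Class<?>[]{NameNodeProtocol.class},
                new HedgingHandler(Arrays.asList(standby, active)));

        hedged.getStatus(); // first call hedges: both NNs are tried
        hedged.getStatus(); // second call hits only the pinned NN
        System.out.println(calls[0] + "," + calls[1]); // 1,2
    }
}
```

The bug described above corresponds to currentUsedProxy never being consulted, so every call takes the hedging loop instead of the fast path.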






[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread Jinglun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-13388:
---
Description: 
In HDFS-7858 RequestHedgingProxyProvider was designed to "first simultaneously 
call multiple configured NNs to decide which is the active Namenode and then 
for subsequent calls it will invoke the previously successful NN ." But the 
current code call multiple configured NNs every time even when we already got 
the successful NN. 
 That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
proxyInfo is assigned only when it is constructed or when failover occurs. 
RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the only 
proxy we can get is always a dynamic proxy handled by 
RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
handles invoked method by calling multiple configured NNs.

  was:
In HDFS-7858 RequestHedgingProxyProvider was designed to "first simultaneously 
call multiple configured NNs to decide which is the active Namenode and then 
for subsequent calls it will invoke the previously successful NN ." But the 
current code call multiple configured NNs every time even when we already got 
the successful NN. 
 That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
proxyInfo is assigned only when it is constructed or when failover occurs. 
RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the only 
proxy we can get is always a dynamic proxy handled by 
RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
handles invoked method by call multiple configured NNs.


> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every time, 
> even when we have already found the successful NN. 
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class, which handles each invoked method by 
> calling multiple configured NNs.






[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread Jinglun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-13388:
---
Description: 
In HDFS-7858 RequestHedgingProxyProvider was designed to "first simultaneously 
call multiple configured NNs to decide which is the active Namenode and then 
for subsequent calls it will invoke the previously successful NN ." But the 
current code call multiple configured NNs every time even when we already got 
the successful NN. 
 That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
proxyInfo is assigned only when it is constructed or when failover occurs. 
RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the only 
proxy we can get is always a dynamic proxy handled by 
RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
handles invoked method by call multiple configured NNs.

  was:
In HDFS-7858, RequestHedgingProxyProvider was designed to "first simultaneously 
call multiple configured NNs to decide which is the active Namenode and then 
for subsequent calls it will invoke the previously successful NN." But the 
current code calls multiple configured NNs every time, even when we have 
already found the successful NN. 
That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
proxyInfo is assigned only when it is constructed or when failover occurs. 
RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the only 
proxy we can get is always a dynamic proxy handled by 
RequestHedgingInvocationHandler.class. It handles method invocations by calling 
multiple configured NNs.


> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch
>
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN. 
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles invoked methods by calling multiple configured NNs.






[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread Jinglun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-13388:
---
Attachment: HADOOP-13388.0001.patch

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch
>
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN. 
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. It handles method invocations by 
> calling multiple configured NNs.






[jira] [Created] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-03 Thread Jinglun (JIRA)
Jinglun created HDFS-13388:
--

 Summary: RequestHedgingProxyProvider calls multiple configured NNs 
all the time
 Key: HDFS-13388
 URL: https://issues.apache.org/jira/browse/HDFS-13388
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Jinglun


In HDFS-7858, RequestHedgingProxyProvider was designed to "first simultaneously 
call multiple configured NNs to decide which is the active Namenode and then 
for subsequent calls it will invoke the previously successful NN." But the 
current code calls multiple configured NNs every time, even when we have 
already found the successful NN. 
That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
proxyInfo is assigned only when it is constructed or when failover occurs. 
RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the only 
proxy we can get is always a dynamic proxy handled by 
RequestHedgingInvocationHandler.class. It handles method invocations by calling 
multiple configured NNs.






[jira] [Updated] (HDFS-13387) Make multi-thread access class BlockInfoContiguous thread safe

2018-04-03 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDFS-13387:
--
Summary: Make multi-thread access class BlockInfoContiguous thread safe  
(was: Make multi-thread access class thread safe)

> Make multi-thread access class BlockInfoContiguous thread safe
> --
>
> Key: HDFS-13387
> URL: https://issues.apache.org/jira/browse/HDFS-13387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>
> This jira makes the class BlockInfoContiguous thread-safe, so that we no 
> longer need the NameSystemLock to lock the full flow. This is just a base 
> step toward the plan of HDFS-8966.
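The general shift described above (per-object synchronization instead of one namesystem-wide lock) can be illustrated with a minimal sketch. Class, field, and method names below are invented; this is not the actual BlockInfoContiguous implementation or the HDFS-13387 patch, only the locking pattern.

```java
// Illustrative pattern only: guard each block's mutable state with its own
// monitor, so readers and writers of different blocks no longer contend on
// a single global (NameSystemLock-style) lock.
public class ThreadSafeBlockInfo {
    private final long blockId;      // immutable, safe to read without locking
    private final Object[] storages; // mutable state guarded by 'this'

    public ThreadSafeBlockInfo(long blockId, int capacity) {
        this.blockId = blockId;
        this.storages = new Object[capacity];
    }

    // Mutators and readers of shared state synchronize on the block itself.
    public synchronized void setStorage(int index, Object storage) {
        storages[index] = storage;
    }

    public synchronized Object getStorage(int index) {
        return storages[index];
    }

    public long getBlockId() {
        return blockId; // final field, no lock needed
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadSafeBlockInfo block = new ThreadSafeBlockInfo(1024L, 4);
        // Two writers touching different slots concurrently; each update is
        // atomic and visible because both go through the same monitor.
        Thread t1 = new Thread(() -> block.setStorage(0, "datanode-a"));
        Thread t2 = new Thread(() -> block.setStorage(1, "datanode-b"));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(block.getStorage(0) + " " + block.getStorage(1));
    }
}
```

The trade-off is finer-grained locking: per-object monitors remove the global bottleneck, but any invariant that spans several blocks would still need coordination above this level.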






[jira] [Work started] (HDFS-13387) Make multi-thread access class thread safe

2018-04-03 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13387 started by maobaolong.
-
> Make multi-thread access class thread safe
> --
>
> Key: HDFS-13387
> URL: https://issues.apache.org/jira/browse/HDFS-13387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>
> This jira makes the class BlockInfoContiguous thread-safe, so that we no 
> longer need the NameSystemLock to lock the full flow. This is just a base 
> step toward the plan of HDFS-8966.






[jira] [Created] (HDFS-13387) Make multi-thread access class thread safe

2018-04-03 Thread maobaolong (JIRA)
maobaolong created HDFS-13387:
-

 Summary: Make multi-thread access class thread safe
 Key: HDFS-13387
 URL: https://issues.apache.org/jira/browse/HDFS-13387
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.2.0
Reporter: maobaolong
Assignee: maobaolong


This jira makes the class BlockInfoContiguous thread-safe, so that we no 
longer need the NameSystemLock to lock the full flow. This is just a base 
step toward the plan of HDFS-8966.





