[jira] [Created] (HDDS-155) Implement KeyValueContainer

2018-06-05 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-155:
---

 Summary: Implement KeyValueContainer
 Key: HDDS-155
 URL: https://issues.apache.org/jira/browse/HDDS-155
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira is to add the following:
 # Implement the Container interface.
 # Use the new directory layout proposed in the design document:
a. Data location (chunks)
b. Meta location (DB and .container files)
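
As a rough illustration of items 1 and 2, a minimal sketch; the Container stand-in, field names, and the "chunks"/"metadata" subdirectory names are assumptions for illustration, not the committed design:

{code:java}
import java.io.File;

// Stand-in for the Container interface from item 1.
interface Container {
  long getContainerId();
}

// Illustrative sketch only: chunk data and metadata (DB + .container file)
// live under separate roots, per the proposed directory layout.
public class KeyValueContainer implements Container {
  private final long containerId;
  private final File chunksDir;   // data location: chunk files
  private final File metadataDir; // meta location: DB and .container file

  public KeyValueContainer(long containerId, File baseDir) {
    this.containerId = containerId;
    this.chunksDir = new File(baseDir, "chunks");
    this.metadataDir = new File(baseDir, "metadata");
  }

  @Override
  public long getContainerId() {
    return containerId;
  }
}
{code}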



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13511) Provide specialized exception when block length cannot be obtained

2018-06-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502815#comment-16502815
 ] 

Hudson commented on HDFS-13511:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14370 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14370/])
HDFS-13511. Provide specialized exception when block length cannot be (xiao: 
rev 774c1f199e11d886d0c0a1069325f0284da35deb)
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/CannotObtainBlockLengthException.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java


> Provide specialized exception when block length cannot be obtained
> --
>
> Key: HDFS-13511
> URL: https://issues.apache.org/jira/browse/HDFS-13511
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13511.001.patch, HDFS-13511.002.patch, 
> HDFS-13511.003.patch
>
>
> In a downstream project, I saw the following code:
> {code}
> FSDataInputStream inputStream = hdfs.open(new Path(path));
> ...
> if (options.getRecoverFailedOpen() && dfs != null && 
> e.getMessage().toLowerCase()
> .startsWith("cannot obtain block length for")) {
> {code}
> The above tightly depends on the following in DFSInputStream#readBlockLength
> {code}
> throw new IOException("Cannot obtain block length for " + locatedblock);
> {code}
> The check based on string matching is brittle in production deployments.
> After discussing with [~ste...@apache.org], a better approach is to introduce a 
> specialized IOException, e.g. CannotObtainBlockLengthException, so that 
> downstream projects don't have to rely on string matching.
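
For illustration, a minimal sketch of what the downstream check could become once the specialized exception exists; {{options}}, {{dfs}}, and {{hdfs}} mirror the snippet above, and {{recoverFailedOpen}} is a hypothetical placeholder for the downstream recovery logic:

{code:java}
try (FSDataInputStream inputStream = hdfs.open(new Path(path))) {
  // ... read from inputStream ...
} catch (CannotObtainBlockLengthException e) {
  // The exception type identifies the case; no string matching needed.
  if (options.getRecoverFailedOpen() && dfs != null) {
    recoverFailedOpen(path); // hypothetical recovery hook
  }
}
{code}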



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13511) Provide specialized exception when block length cannot be obtained

2018-06-05 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13511:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks Gabor for the contribution, and others for 
ideas/reviews.

> Provide specialized exception when block length cannot be obtained
> --
>
> Key: HDFS-13511
> URL: https://issues.apache.org/jira/browse/HDFS-13511
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13511.001.patch, HDFS-13511.002.patch, 
> HDFS-13511.003.patch
>
>
> In a downstream project, I saw the following code:
> {code}
> FSDataInputStream inputStream = hdfs.open(new Path(path));
> ...
> if (options.getRecoverFailedOpen() && dfs != null && 
> e.getMessage().toLowerCase()
> .startsWith("cannot obtain block length for")) {
> {code}
> The above tightly depends on the following in DFSInputStream#readBlockLength
> {code}
> throw new IOException("Cannot obtain block length for " + locatedblock);
> {code}
> The check based on string matching is brittle in production deployments.
> After discussing with [~ste...@apache.org], a better approach is to introduce a 
> specialized IOException, e.g. CannotObtainBlockLengthException, so that 
> downstream projects don't have to rely on string matching.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-123) ContainerSet class to manage ContainerMap

2018-06-05 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502800#comment-16502800
 ] 

Bharat Viswanadham edited comment on HDDS-123 at 6/6/18 4:16 AM:
-

Hi [~xyao]

I have addressed the review comments.

Could you please help review the latest updated patch?


was (Author: bharatviswa):
Hi [~xyao]

I have addressed the review comments.

Could you please have a look at the latest updated patch?

> ContainerSet class to manage ContainerMap 
> --
>
> Key: HDDS-123
> URL: https://issues.apache.org/jira/browse/HDDS-123
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-123-HDDS-48.00.patch, HDDS-123-HDDS-48.01.patch, 
> HDDS-123-HDDS-48.02.patch, HDDS-123-HDDS-48.03.patch, 
> HDDS-123-HDDS-48.04.patch, HDDS-123-HDDS-48.05.patch
>
>
> Create a ContainerSet class, which manages containerMap.
> Previously the container map was in ContainerManagerImpl; with the refactoring 
> work it should be moved to ContainerSet. 
> This class should handle adding/getting/removing containers from containerMap.
> It should also now handle containerReport.
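
For illustration, a minimal sketch of such a class, assuming a concurrent map keyed by container ID; the method names and the Container stand-in are illustrative, not the committed API:

{code:java}
import java.util.concurrent.ConcurrentSkipListMap;

public class ContainerSet {
  // Stand-in for the real container type.
  interface Container {
    long getContainerId();
  }

  private final ConcurrentSkipListMap<Long, Container> containerMap =
      new ConcurrentSkipListMap<>();

  // putIfAbsent keeps the first registration and reports duplicates.
  public boolean addContainer(Container container) {
    return containerMap.putIfAbsent(
        container.getContainerId(), container) == null;
  }

  public Container getContainer(long containerId) {
    return containerMap.get(containerId);
  }

  public Container removeContainer(long containerId) {
    return containerMap.remove(containerId);
  }
}
{code}

A container report could then iterate the same map, keeping reporting consistent with add/get/remove.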



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-123) ContainerSet class to manage ContainerMap

2018-06-05 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502800#comment-16502800
 ] 

Bharat Viswanadham commented on HDDS-123:
-

Hi [~xyao]

I have addressed the review comments.

Could you please have a look at the latest updated patch?

> ContainerSet class to manage ContainerMap 
> --
>
> Key: HDDS-123
> URL: https://issues.apache.org/jira/browse/HDDS-123
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-123-HDDS-48.00.patch, HDDS-123-HDDS-48.01.patch, 
> HDDS-123-HDDS-48.02.patch, HDDS-123-HDDS-48.03.patch, 
> HDDS-123-HDDS-48.04.patch, HDDS-123-HDDS-48.05.patch
>
>
> Create a ContainerSet class, which manages containerMap.
> Previously the container map was in ContainerManagerImpl; with the refactoring 
> work it should be moved to ContainerSet. 
> This class should handle adding/getting/removing containers from containerMap.
> It should also now handle containerReport.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13653) Make dfs.client.failover.random.order a per nameservice configuration

2018-06-05 Thread Ekanth Sethuramalingam (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502760#comment-16502760
 ] 

Ekanth Sethuramalingam commented on HDFS-13653:
---

Thanks [~elgoiri] for the comments. Addressed 3 of the 4 comments. Also handled 
a bunch of checkstyle errors that came through the yetus run.
 * It could use some refactoring; currently it has 3 mocks that are pretty much 
the same in a couple of test cases.

Could you help me understand what you meant here? Each of the mocks is a closure 
over a unique AtomicInteger corresponding to that mock. Let me know how I can 
better refactor this (one possible shape is sketched below). Attached new patch: 
[^HDFS-13653.006.patch].
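
For reference, one way the three mocks could collapse into a single factory, each instance closing over its own counter; the proxy interface below is a hypothetical stand-in for the real type under test:

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

class ProxyMockFactory {
  // Hypothetical stand-in for the proxied NameNode interface.
  interface NameNodeProxy {
    void call();
  }

  // One factory replaces the three near-identical inline mocks; each
  // returned stub is a closure over its own AtomicInteger.
  static NameNodeProxy countingProxy(AtomicInteger counter) {
    return counter::incrementAndGet;
  }
}
{code}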

Also, the unit test failures seem unrelated, as the only changes in the recent 
patch were additions of new UTs. So I suspect they are already flaky?

> Make dfs.client.failover.random.order a per nameservice configuration
> -
>
> Key: HDFS-13653
> URL: https://issues.apache.org/jira/browse/HDFS-13653
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Major
> Attachments: HDFS-13653.001.patch, HDFS-13653.002.patch, 
> HDFS-13653.003.patch, HDFS-13653.004.patch, HDFS-13653.005.patch, 
> HDFS-13653.006.patch
>
>
> Currently the dfs.client.failover.random.order is applied globally. If we 
> have a combination of router and non-router nameservice, the random order 
> should ideally be enabled only for the router based nameservice. This Jira is 
> to make this configuration per-nameservice so that this can be configured 
> independently for each nameservice. 
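
For illustration, the lookup could resolve a per-nameservice key and fall back to the global one; the key-suffix scheme below is an assumption (the patch may use Hadoop's existing suffix helpers instead):

{code:java}
import org.apache.hadoop.conf.Configuration;

class FailoverRandomOrder {
  static final String RANDOM_ORDER_KEY = "dfs.client.failover.random.order";

  // The per-nameservice value wins; the global key remains the default.
  static boolean isRandomOrder(Configuration conf, String nsId) {
    boolean globalDefault = conf.getBoolean(RANDOM_ORDER_KEY, false);
    return conf.getBoolean(RANDOM_ORDER_KEY + "." + nsId, globalDefault);
  }
}
{code}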



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13653) Make dfs.client.failover.random.order a per nameservice configuration

2018-06-05 Thread Ekanth Sethuramalingam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekanth Sethuramalingam updated HDFS-13653:
--
Attachment: HDFS-13653.006.patch

> Make dfs.client.failover.random.order a per nameservice configuration
> -
>
> Key: HDFS-13653
> URL: https://issues.apache.org/jira/browse/HDFS-13653
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Major
> Attachments: HDFS-13653.001.patch, HDFS-13653.002.patch, 
> HDFS-13653.003.patch, HDFS-13653.004.patch, HDFS-13653.005.patch, 
> HDFS-13653.006.patch
>
>
> Currently the dfs.client.failover.random.order is applied globally. If we 
> have a combination of router and non-router nameservice, the random order 
> should ideally be enabled only for the router based nameservice. This Jira is 
> to make this configuration per-nameservice so that this can be configured 
> independently for each nameservice. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13653) Make dfs.client.failover.random.order a per nameservice configuration

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502738#comment-16502738
 ] 

genericqa commented on HDFS-13653:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 12s{color} 
| {color:red} hadoop-hdfs-project generated 1 new + 581 unchanged - 0 fixed = 
582 total (was 581) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  5s{color} | {color:orange} hadoop-hdfs-project: The patch generated 16 new 
+ 7 unchanged - 0 fixed = 23 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
32s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}198m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestMaintenanceState |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13653 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926652/HDFS-13653.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 

[jira] [Commented] (HDFS-13653) Make dfs.client.failover.random.order a per nameservice configuration

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502727#comment-16502727
 ] 

genericqa commented on HDFS-13653:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 40s{color} 
| {color:red} hadoop-hdfs-project generated 1 new + 581 unchanged - 0 fixed = 
582 total (was 581) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-hdfs-project: The patch generated 16 new 
+ 7 unchanged - 0 fixed = 23 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}214m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13653 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926652/HDFS-13653.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 1f4dabed3201 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502704#comment-16502704
 ] 

genericqa commented on HDFS-13448:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 40m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 40m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 40m 44s{color} 
| {color:red} root generated 2 new + 1487 unchanged - 0 fixed = 1489 total (was 
1487) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 23s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
3s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}290m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestTrash |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSOutputStream |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13448 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926622/HDFS-13448.9.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  

[jira] [Commented] (HDFS-13121) NPE when request file descriptors when SC read

2018-06-05 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502686#comment-16502686
 ] 

Wei-Chiu Chuang commented on HDFS-13121:


The other thing is that you should close the cluster object at the end of the test.

It's probably a better idea to use 

 
{code:java}
try (MiniDFSCluster cluster = ...) {
 ...
}
{code}
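
For instance, a slightly fuller sketch (builder arguments are illustrative):

{code:java}
try (MiniDFSCluster cluster =
    new MiniDFSCluster.Builder(conf).numDataNodes(1).build()) {
  cluster.waitActive();
  // ... test body ...
} // cluster is closed even if an assertion fails
{code}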
 

Otherwise, the rest of the code looks good to me.

> NPE when request file descriptors when SC read
> --
>
> Key: HDFS-13121
> URL: https://issues.apache.org/jira/browse/HDFS-13121
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Gang Xie
>Assignee: Zsolt Venczel
>Priority: Minor
> Attachments: HDFS-13121.01.patch, HDFS-13121.02.patch, 
> HDFS-13121.03.patch, test-only.patch
>
>
> Recently, we hit an issue where the DFSClient throws an NPE. The case is that 
> the app process exceeds the max-open-files limit. In that case, libhadoop 
> never throws an exception but returns null for the requested fds, and 
> requestFileDescriptors uses the returned fds directly without any check, 
> which leads to the NPE. 
>  
> We need to add a null-pointer sanity check here.
>  
> {code:java}
> private ShortCircuitReplicaInfo requestFileDescriptors(DomainPeer peer,
>     Slot slot) throws IOException {
>   ShortCircuitCache cache = clientContext.getShortCircuitCache();
>   final DataOutputStream out =
>       new DataOutputStream(new BufferedOutputStream(peer.getOutputStream()));
>   SlotId slotId = slot == null ? null : slot.getSlotId();
>   new Sender(out).requestShortCircuitFds(block, token, slotId, 1,
>       failureInjector.getSupportsReceiptVerification());
>   DataInputStream in = new DataInputStream(peer.getInputStream());
>   BlockOpResponseProto resp = BlockOpResponseProto.parseFrom(
>       PBHelperClient.vintPrefixed(in));
>   DomainSocket sock = peer.getDomainSocket();
>   failureInjector.injectRequestFileDescriptorsFailure();
>   switch (resp.getStatus()) {
>   case SUCCESS:
>     byte buf[] = new byte[1];
>     FileInputStream[] fis = new FileInputStream[2];
>     sock.recvFileInputStreams(fis, buf, 0, buf.length); // highlighted: fis may stay null
>     ShortCircuitReplica replica = null;
>     try {
>       ExtendedBlockId key =
>           new ExtendedBlockId(block.getBlockId(), block.getBlockPoolId());
>       if (buf[0] == USE_RECEIPT_VERIFICATION.getNumber()) {
>         LOG.trace("Sending receipt verification byte for slot {}", slot);
>         sock.getOutputStream().write(0);
>       }
>       replica = new ShortCircuitReplica(key, fis[0], fis[1], cache, // highlighted: NPE when fis entries are null
>           Time.monotonicNow(), slot);
> {code}
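
A minimal sketch of the proposed check (placement and message are assumptions; the attached patches define the actual fix):

{code:java}
sock.recvFileInputStreams(fis, buf, 0, buf.length);
if (fis[0] == null || fis[1] == null) {
  // Fail fast with a descriptive error instead of an NPE later on,
  // e.g. when the process has hit its open-file limit.
  throw new IOException(
      "the datanode failed to pass file descriptors over " + sock);
}
{code}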



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13121) NPE when request file descriptors when SC read

2018-06-05 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502686#comment-16502686
 ] 

Wei-Chiu Chuang edited comment on HDFS-13121 at 6/6/18 12:15 AM:
-

The other thing is that you should close the cluster object at the end of the test.

It's probably a better idea to use 

 
{code:java}
try (MiniDFSCluster cluster = ...) {
 ...
}
{code}
 

Other than those, the rest of the code looks good to me.


was (Author: jojochuang):
The other thing is that you should close the cluster object at the end of the test.

It's probably a better idea to use 

 
{code:java}
try (MiniDFSCluster cluster = ...) {
 ...
}
{code}
 

Otherwise, the rest of the code looks good to me.

> NPE when request file descriptors when SC read
> --
>
> Key: HDFS-13121
> URL: https://issues.apache.org/jira/browse/HDFS-13121
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Gang Xie
>Assignee: Zsolt Venczel
>Priority: Minor
> Attachments: HDFS-13121.01.patch, HDFS-13121.02.patch, 
> HDFS-13121.03.patch, test-only.patch
>
>
> Recently, we hit an issue where the DFSClient throws an NPE. The case is that 
> the app process exceeds the max-open-files limit. In that case, libhadoop 
> never throws an exception but returns null for the requested fds, and 
> requestFileDescriptors uses the returned fds directly without any check, 
> which leads to the NPE. 
>  
> We need to add a null-pointer sanity check here.
>  
> {code:java}
> private ShortCircuitReplicaInfo requestFileDescriptors(DomainPeer peer,
>     Slot slot) throws IOException {
>   ShortCircuitCache cache = clientContext.getShortCircuitCache();
>   final DataOutputStream out =
>       new DataOutputStream(new BufferedOutputStream(peer.getOutputStream()));
>   SlotId slotId = slot == null ? null : slot.getSlotId();
>   new Sender(out).requestShortCircuitFds(block, token, slotId, 1,
>       failureInjector.getSupportsReceiptVerification());
>   DataInputStream in = new DataInputStream(peer.getInputStream());
>   BlockOpResponseProto resp = BlockOpResponseProto.parseFrom(
>       PBHelperClient.vintPrefixed(in));
>   DomainSocket sock = peer.getDomainSocket();
>   failureInjector.injectRequestFileDescriptorsFailure();
>   switch (resp.getStatus()) {
>   case SUCCESS:
>     byte buf[] = new byte[1];
>     FileInputStream[] fis = new FileInputStream[2];
>     sock.recvFileInputStreams(fis, buf, 0, buf.length); // highlighted: fis may stay null
>     ShortCircuitReplica replica = null;
>     try {
>       ExtendedBlockId key =
>           new ExtendedBlockId(block.getBlockId(), block.getBlockPoolId());
>       if (buf[0] == USE_RECEIPT_VERIFICATION.getNumber()) {
>         LOG.trace("Sending receipt verification byte for slot {}", slot);
>         sock.getOutputStream().write(0);
>       }
>       replica = new ShortCircuitReplica(key, fis[0], fis[1], cache, // highlighted: NPE when fis entries are null
>           Time.monotonicNow(), slot);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13653) Make dfs.client.failover.random.order a per nameservice configuration

2018-06-05 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502632#comment-16502632
 ] 

Íñigo Goiri edited comment on HDFS-13653 at 6/5/18 11:09 PM:
-

Thanks [~ekanth], the unit test is exactly what I was talking about.
A couple of comments:
* It will complain about the missing license.
* We probably should check nnXCount.get() > 0 in the random case.
* It could use some refactoring; currently it has 3 mocks that are pretty much 
the same in a couple of test cases.
* numIterations could be a constant.


was (Author: elgoiri):
Thanks [~ekanth], the unit test is exactly what I was talking about.
A couple of comments:
* It will complain about the missing license.
* We probably should check nnXCount.get() > 0 in the random case.
* It could use some refactoring; currently it has 3 mocks that are pretty much 
the same in a couple of test cases.

> Make dfs.client.failover.random.order a per nameservice configuration
> -
>
> Key: HDFS-13653
> URL: https://issues.apache.org/jira/browse/HDFS-13653
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Major
> Attachments: HDFS-13653.001.patch, HDFS-13653.002.patch, 
> HDFS-13653.003.patch, HDFS-13653.004.patch, HDFS-13653.005.patch
>
>
> Currently the dfs.client.failover.random.order is applied globally. If we 
> have a combination of router and non-router nameservice, the random order 
> should ideally be enabled only for the router based nameservice. This Jira is 
> to make this configuration per-nameservice so that this can be configured 
> independently for each nameservice. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13653) Make dfs.client.failover.random.order a per nameservice configuration

2018-06-05 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502632#comment-16502632
 ] 

Íñigo Goiri commented on HDFS-13653:


Thanks [~ekanth], the unit test is exactly what I was talking about.
A couple of comments:
* It will complain about the missing license.
* We probably should check nnXCount.get() > 0 in the random case.
* It could use some refactoring; currently it has 3 mocks that are pretty much 
the same in a couple of test cases.

> Make dfs.client.failover.random.order a per nameservice configuration
> -
>
> Key: HDFS-13653
> URL: https://issues.apache.org/jira/browse/HDFS-13653
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Major
> Attachments: HDFS-13653.001.patch, HDFS-13653.002.patch, 
> HDFS-13653.003.patch, HDFS-13653.004.patch, HDFS-13653.005.patch
>
>
> Currently the dfs.client.failover.random.order is applied globally. If we 
> have a combination of router and non-router nameservice, the random order 
> should ideally be enabled only for the router based nameservice. This Jira is 
> to make this configuration per-nameservice so that this can be configured 
> independently for each nameservice. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-123) ContainerSet class to manage ContainerMap

2018-06-05 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502616#comment-16502616
 ] 

Hanisha Koneru commented on HDDS-123:
-

Thanks [~bharatviswa]. 

+1 for patch v05.

> ContainerSet class to manage ContainerMap 
> --
>
> Key: HDDS-123
> URL: https://issues.apache.org/jira/browse/HDDS-123
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-123-HDDS-48.00.patch, HDDS-123-HDDS-48.01.patch, 
> HDDS-123-HDDS-48.02.patch, HDDS-123-HDDS-48.03.patch, 
> HDDS-123-HDDS-48.04.patch, HDDS-123-HDDS-48.05.patch
>
>
> Create a ContainerSet class, which manages containerMap.
> Previously the container map was in ContainerManagerImpl; with the refactoring 
> work it should be moved to ContainerSet. 
> This class should handle adding/getting/removing containers from containerMap.
> It should also now handle containerReport.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502612#comment-16502612
 ] 

genericqa commented on HDFS-13536:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 26m 36s{color} 
| {color:red} root generated 1 new + 1487 unchanged - 0 fixed = 1488 total (was 
1487) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 47s{color} | {color:orange} root: The patch generated 14 new + 719 unchanged 
- 3 fixed = 733 total (was 722) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 41s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}230m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13536 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926616/HDFS-13536.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  

[jira] [Commented] (HDFS-13607) [Edit Tail Fast Path Pt 1] Enhance JournalNode with an in-memory cache of recent edit transactions

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502603#comment-16502603
 ] 

genericqa commented on HDFS-13607:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
13s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 53s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 530 unchanged - 
1 fixed = 531 total (was 531) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 434 unchanged - 0 fixed = 436 total (was 434) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13607 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926620/HDFS-13607-HDFS-12943.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 716eb0536474 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 9a52c63 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| javac | 

[jira] [Commented] (HDFS-13607) [Edit Tail Fast Path Pt 1] Enhance JournalNode with an in-memory cache of recent edit transactions

2018-06-05 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502591#comment-16502591
 ] 

Chao Sun commented on HDFS-13607:
-

Ah sorry [~xkrogen], you are right. I misread the lines. The patch looks good 
to me then.

> [Edit Tail Fast Path Pt 1] Enhance JournalNode with an in-memory cache of 
> recent edit transactions
> --
>
> Key: HDFS-13607
> URL: https://issues.apache.org/jira/browse/HDFS-13607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, journal-node
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13607-HDFS-12943.000.patch, 
> HDFS-13607-HDFS-12943.001.patch, HDFS-13607-HDFS-12943.002.patch, 
> HDFS-13607-HDFS-12943.003.patch
>
>
> See HDFS-13150 for full design.
> This JIRA is to add the in-memory cache of recent edit transactions on the 
> JournalNode. This JIRA does not include accesses to this cache.
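
As a rough illustration, a bounded, txid-keyed cache of serialized edit batches; the class name follows the JIRA summary, and the internals here are assumptions (see HDFS-13150 for the actual design):

{code:java}
import java.util.Map;
import java.util.TreeMap;

class JournaledEditsCache {
  // Maps the first txid of a batch to its serialized edits.
  private final TreeMap<Long, byte[]> cache = new TreeMap<>();
  private final long capacityBytes;
  private long totalSize = 0;

  JournaledEditsCache(long capacityBytes) {
    this.capacityBytes = capacityBytes;
  }

  synchronized void storeEdits(long firstTxId, byte[] edits) {
    cache.put(firstTxId, edits);
    totalSize += edits.length;
    // Evict the oldest batches once the cache exceeds its byte budget.
    while (totalSize > capacityBytes && cache.size() > 1) {
      Map.Entry<Long, byte[]> eldest = cache.pollFirstEntry();
      totalSize -= eldest.getValue().length;
    }
  }
}
{code}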



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13653) Make dfs.client.failover.random.order a per nameservice configuration

2018-06-05 Thread Ekanth Sethuramalingam (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502567#comment-16502567
 ] 

Ekanth Sethuramalingam commented on HDFS-13653:
---

[~csun] pointed out the file name convention for tests. Updated patch here: 
[^HDFS-13653.005.patch].

> Make dfs.client.failover.random.order a per nameservice configuration
> -
>
> Key: HDFS-13653
> URL: https://issues.apache.org/jira/browse/HDFS-13653
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Major
> Attachments: HDFS-13653.001.patch, HDFS-13653.002.patch, 
> HDFS-13653.003.patch, HDFS-13653.004.patch, HDFS-13653.005.patch
>
>
> Currently the dfs.client.failover.random.order is applied globally. If we 
> have a combination of router and non-router nameservice, the random order 
> should ideally be enabled only for the router based nameservice. This Jira is 
> to make this configuration per-nameservice so that this can be configured 
> independently for each nameservice. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13653) Make dfs.client.failover.random.order a per nameservice configuration

2018-06-05 Thread Ekanth Sethuramalingam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekanth Sethuramalingam updated HDFS-13653:
--
Attachment: HDFS-13653.005.patch

> Make dfs.client.failover.random.order a per nameservice configuration
> -
>
> Key: HDFS-13653
> URL: https://issues.apache.org/jira/browse/HDFS-13653
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Major
> Attachments: HDFS-13653.001.patch, HDFS-13653.002.patch, 
> HDFS-13653.003.patch, HDFS-13653.004.patch, HDFS-13653.005.patch
>
>
> Currently the dfs.client.failover.random.order is applied globally. If we 
> have a combination of router and non-router nameservice, the random order 
> should ideally be enabled only for the router based nameservice. This Jira is 
> to make this configuration per-nameservice so that this can be configured 
> independently for each nameservice. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13653) Make dfs.client.failover.random.order a per nameservice configuration

2018-06-05 Thread Ekanth Sethuramalingam (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502547#comment-16502547
 ] 

Ekanth Sethuramalingam commented on HDFS-13653:
---

Thanks for the suggestions, [~elgoiri]. I have updated the patch with unit 
tests for the ConfiguredFailoverProxyProvider class. I ran the test ~25 times 
and it passed every time. With 50 iterations, it should be fairly resilient to 
any flakiness. Appreciate your review.

> Make dfs.client.failover.random.order a per nameservice configuration
> -
>
> Key: HDFS-13653
> URL: https://issues.apache.org/jira/browse/HDFS-13653
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Major
> Attachments: HDFS-13653.001.patch, HDFS-13653.002.patch, 
> HDFS-13653.003.patch, HDFS-13653.004.patch
>
>
> Currently the dfs.client.failover.random.order is applied globally. If we 
> have a combination of router and non-router nameservice, the random order 
> should ideally be enabled only for the router based nameservice. This Jira is 
> to make this configuration per-nameservice so that this can be configured 
> independently for each nameservice. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13653) Make dfs.client.failover.random.order a per nameservice configuration

2018-06-05 Thread Ekanth Sethuramalingam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekanth Sethuramalingam updated HDFS-13653:
--
Attachment: HDFS-13653.004.patch

> Make dfs.client.failover.random.order a per nameservice configuration
> -
>
> Key: HDFS-13653
> URL: https://issues.apache.org/jira/browse/HDFS-13653
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Major
> Attachments: HDFS-13653.001.patch, HDFS-13653.002.patch, 
> HDFS-13653.003.patch, HDFS-13653.004.patch
>
>
> Currently the dfs.client.failover.random.order is applied globally. If we 
> have a combination of router and non-router nameservice, the random order 
> should ideally be enabled only for the router based nameservice. This Jira is 
> to make this configuration per-nameservice so that this can be configured 
> independently for each nameservice. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-06-05 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502399#comment-16502399
 ] 

BELUGA BEHR edited comment on HDFS-13448 at 6/5/18 7:57 PM:


I took a look at mocking but it is really hacky and forced.  These tests are 
not designed with mocking in mind, as seen by the use of the deprecated 
{{Whitebox.setInternalState}} call.  I just feel that this suite of tests is 
a little more integration-related, and it would make the tests entirely too 
fragile if we started trying to force internal magic.  At this level, we're 
testing the overall functionality, not the individual components.


was (Author: belugabehr):
I took a look at mocking but it is really hacky and forced.  These tests are 
not designed with mocking in mind, as seen by the use of the deprecated 
{{Whitebox.setInternalState}} call.  I just feel that this suite of tests is 
a little more integration-related, and it would make the tests entirely too 
fragile if we started trying to force internal magic.

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch, 
> HDFS-13448.6.patch, HDFS-13448.7.patch, HDFS-13448.8.patch, HDFS-13448.9.patch
>
>
> According to the HDFS Block Place Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  This comes into play when you have, for example, a Flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica, and this 
> leads to uneven block placement, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example: if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only insofar as the first block replica will now 
> always be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.
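
As an illustration of the proposal, a minimal sketch of how a client such as a 
Flume agent might request this behavior, assuming the new flag is exposed 
through {{CreateFlag}} alongside {{NO_LOCAL_WRITE}}; the flag name 
{{IGNORE_CLIENT_LOCALITY}} is the proposed name here, not necessarily what a 
committed patch would use:

{code:java}
// Sketch only: IGNORE_CLIENT_LOCALITY is the proposed flag name and may
// differ in the final patch.
FileSystem fs = FileSystem.get(conf);
FSDataOutputStream out = fs.create(
    new Path("/flume/events/part-00000"),
    FsPermission.getFileDefault(),
    EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE,
        CreateFlag.IGNORE_CLIENT_LOCALITY), // first replica on a random DN
    4096,                     // buffer size
    (short) 3,                // replication
    fs.getDefaultBlockSize(), // block size
    null);                    // no progress callback
{code}

Subsequent replicas would still follow the normal placement rules, so only the 
first-replica hot-spotting described above changes.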



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-06-05 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502399#comment-16502399
 ] 

BELUGA BEHR commented on HDFS-13448:


I took a look at mocking, but it is really hacky and forced.  These tests are 
not designed with mocking in mind, as seen by the use of the deprecated 
{{Whitebox.setInternalState}} call.  I just feel that this suite of tests is 
more integration-oriented, and forcing internal magic would make them entirely 
too fragile.

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch, 
> HDFS-13448.6.patch, HDFS-13448.7.patch, HDFS-13448.8.patch, HDFS-13448.9.patch
>
>
> According to the HDFS Block Placement Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  This comes into play when you have, for example, a Flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica, and this 
> leads to uneven block placement, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example: if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only insofar as the first block replica will now 
> always be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13265) MiniDFSCluster should set reasonable defaults to reduce resource consumption

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502395#comment-16502395
 ] 

genericqa commented on HDFS-13265:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-13265 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13265 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914519/HDFS-13265-branch-2.000.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24388/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MiniDFSCluster should set reasonable defaults to reduce resource consumption
> 
>
> Key: HDFS-13265
> URL: https://issues.apache.org/jira/browse/HDFS-13265
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13265-branch-2.000.patch, 
> HDFS-13265-branch-2.000.patch, HDFS-13265.000.patch, 
> TestMiniDFSClusterThreads.java
>
>
> MiniDFSCluster takes its defaults from {{DFSConfigKeys}} defaults, but many 
> of these are not suitable for a unit test environment. For example, the 
> default handler thread count of 10 is definitely more than necessary for 
> (almost?) any unit test. We should set reasonable, lower defaults unless a 
> test specifically requires more.
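
For a concrete sense of the proposal, a test can already lower these knobs by 
hand; the JIRA is about making such values the default. A minimal sketch using 
the existing {{DFSConfigKeys}} names:

{code:java}
// Sketch: explicitly lowering per-test resource usage with today's knobs.
Configuration conf = new HdfsConfiguration();
conf.setInt(DFSConfigKeys.DFS_NAMENODE_HANDLER_COUNT_KEY, 1);  // default is 10
conf.setInt(DFSConfigKeys.DFS_DATANODE_HANDLER_COUNT_KEY, 1);
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(1)
    .build();
try {
  FileSystem fs = cluster.getFileSystem();
  // ... exercise fs as the test requires ...
} finally {
  cluster.shutdown();
}
{code}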



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13607) [Edit Tail Fast Path Pt 1] Enhance JournalNode with an in-memory cache of recent edit transactions

2018-06-05 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502392#comment-16502392
 ] 

Chen Liang commented on HDFS-13607:
---

Thanks for the follow-up [~xkrogen]. Sorry for not clarifying. Yes, that is 
what I meant: it seemed we could emit a log message indicating that the buffer 
holds more entries than the configured capacity, which could be confusing to 
anyone debugging from that log.

> [Edit Tail Fast Path Pt 1] Enhance JournalNode with an in-memory cache of 
> recent edit transactions
> --
>
> Key: HDFS-13607
> URL: https://issues.apache.org/jira/browse/HDFS-13607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, journal-node
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13607-HDFS-12943.000.patch, 
> HDFS-13607-HDFS-12943.001.patch, HDFS-13607-HDFS-12943.002.patch, 
> HDFS-13607-HDFS-12943.003.patch
>
>
> See HDFS-13150 for full design.
> This JIRA is to add the in-memory cache of recent edit transactions on the 
> JournalNode. This JIRA does not include accesses to this cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-06-05 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13448:
---
Status: Patch Available  (was: Open)

Updated the test to not rely on a sleep.  I can look at updating the unit test 
to use mocks, but I could use help.

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 3.0.1, 2.9.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch, 
> HDFS-13448.6.patch, HDFS-13448.7.patch, HDFS-13448.8.patch, HDFS-13448.9.patch
>
>
> According to the HDFS Block Placement Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  This comes into play when you have, for example, a Flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica, and this 
> leads to uneven block placement, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example: if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only insofar as the first block replica will now 
> always be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-06-05 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13448:
---
Status: Open  (was: Patch Available)

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 3.0.1, 2.9.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch, 
> HDFS-13448.6.patch, HDFS-13448.7.patch, HDFS-13448.8.patch, HDFS-13448.9.patch
>
>
> According to the HDFS Block Placement Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  This comes into play when you have, for example, a Flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica, and this 
> leads to uneven block placement, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example: if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only insofar as the first block replica will now 
> always be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-06-05 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13448:
---
Attachment: HDFS-13448.9.patch

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch, 
> HDFS-13448.6.patch, HDFS-13448.7.patch, HDFS-13448.8.patch, HDFS-13448.9.patch
>
>
> According to the HDFS Block Placement Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  This comes into play when you have, for example, a Flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica, and this 
> leads to uneven block placement, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example: if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only insofar as the first block replica will now 
> always be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13265) MiniDFSCluster should set reasonable defaults to reduce resource consumption

2018-06-05 Thread Chris Douglas (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502361#comment-16502361
 ] 

Chris Douglas commented on HDFS-13265:
--

Sorry for the intermittent attention to the prerequisites; I'm not sure I've 
paged in all the context. With HDFS-13493 and HDFS-13272 committed, is the 
remaining work in this JIRA only to use the config knobs for 
{{MiniDFSCluster}}, in branch-2?

> MiniDFSCluster should set reasonable defaults to reduce resource consumption
> 
>
> Key: HDFS-13265
> URL: https://issues.apache.org/jira/browse/HDFS-13265
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13265-branch-2.000.patch, 
> HDFS-13265-branch-2.000.patch, HDFS-13265.000.patch, 
> TestMiniDFSClusterThreads.java
>
>
> MiniDFSCluster takes its defaults from {{DFSConfigKeys}} defaults, but many 
> of these are not suitable for a unit test environment. For example, the 
> default handler thread count of 10 is definitely more than necessary for 
> (almost?) any unit test. We should set reasonable, lower defaults unless a 
> test specifically requires more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13547) Add ingress port based sasl resolver

2018-06-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502351#comment-16502351
 ] 

Hudson commented on HDFS-13547:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14367 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14367/])
HDFS-13547. Add ingress port based sasl resolver. Contributed by Chen (cliang: 
rev 1b0d4f4606adc78a5e43a924634d3d8506db26fa)
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/IngressPortBasedResolver.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPropertiesResolver.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestIngressPortBasedResolver.java


> Add ingress port based sasl resolver
> 
>
> Key: HDFS-13547
> URL: https://issues.apache.org/jira/browse/HDFS-13547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.1.1
>
> Attachments: HDFS-13547.001.patch, HDFS-13547.002.patch, 
> HDFS-13547.003.patch, HDFS-13547.004.patch
>
>
> This Jira extends the SASL properties resolver interface to take an ingress 
> port parameter, and also adds an implementation based on this.
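
As a hedged sketch of what the extension point enables -- the port-aware 
overload and the policy below are illustrative assumptions, not the committed 
{{IngressPortBasedResolver}} logic:

{code:java}
import java.net.InetAddress;
import java.util.HashMap;
import java.util.Map;
import javax.security.sasl.Sasl;
import org.apache.hadoop.security.SaslPropertiesResolver;

// Sketch only: assumes the new port-aware overload added by this JIRA.
public class PortAwareResolver extends SaslPropertiesResolver {
  @Override
  public Map<String, String> getServerProperties(InetAddress clientAddr,
                                                 int ingressPort) {
    Map<String, String> props =
        new HashMap<>(getServerProperties(clientAddr)); // address-only defaults
    if (ingressPort == 8020) {           // hypothetical externally exposed port
      props.put(Sasl.QOP, "auth-conf");  // require privacy on that port only
    }
    return props;
  }
}
{code}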



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13607) [Edit Tail Fast Path Pt 1] Enhance JournalNode with an in-memory cache of recent edit transactions

2018-06-05 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13607:
---
Attachment: HDFS-13607-HDFS-12943.003.patch

> [Edit Tail Fast Path Pt 1] Enhance JournalNode with an in-memory cache of 
> recent edit transactions
> --
>
> Key: HDFS-13607
> URL: https://issues.apache.org/jira/browse/HDFS-13607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, journal-node
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13607-HDFS-12943.000.patch, 
> HDFS-13607-HDFS-12943.001.patch, HDFS-13607-HDFS-12943.002.patch, 
> HDFS-13607-HDFS-12943.003.patch
>
>
> See HDFS-13150 for full design.
> This JIRA is to add the in-memory cache of recent edit transactions on the 
> JournalNode. This JIRA does not include accesses to this cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13607) [Edit Tail Fast Path Pt 1] Enhance JournalNode with an in-memory cache of recent edit transactions

2018-06-05 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502335#comment-16502335
 ] 

Erik Krogen commented on HDFS-13607:


Thanks for the reviews [~vagarychen], [~shv], [~csun]!

[~vagarychen]:
I'm not sure what you mean by a confusing log warn message. Maybe you are 
thinking that rather than logging {{capacity}} (which doesn't change), we are 
logging {{totalSize}}, which would show an over-capacity value at that time? I 
am not sure what else would be confusing about this log as-is. That said, from 
a code readability standpoint it makes sense for the large items to be removed 
before the new item is added, so I refactored this.

[~csun]:
The condition you mentioned is actually triggered before the buffer with a 
different version is added, not after. It is {{prevBuf}} (corresponding to 
{{prevTxn}}) that eventually gets added to {{outputBuffers}}:
{code}
if (prevBuf != null) { // True except for the first loop iteration
  outputBuffers.add(ByteBuffer.wrap(prevBuf));
  ...
}
{code}
So since the loop terminates once {{prevTxn}} does not match the correct layout 
version, this {{prevBuf}} will never be added to {{outputBuffers}}. The test 
you mentioned actually verifies this; {{assertTxnCountAndContents}} checks that 
there are no extra transactions returned beyond what is expected (via the 
{{assertArrayEquals}}, which will fail if there are extra bytes returned). If 
you believe otherwise, can you provide a unit test snippet demonstrating the 
issue? It will definitely be a problem if what you described is true.

[~shv]:
# I removed the use of ConcurrentHashMap in favor of a regular HashMap for the 
{{headerMap}}, and also renamed it to {{headerCache}} to make its purpose 
clearer. I don't think we need to move to the {{LightWeight*}} objects given the 
relatively low number of entries each map is expected to have. The layout 
version and header maps should have only a few entries, so they are negligible. 
For {{dataMap}}, there is only one entry per batch of edits. If we assume edits 
are an average of 200 bytes (standard in our environment), and that batches 
contain at least a few hundred edits, this gives us entries which are tens of 
KB in size, so the overhead of the map is very small in comparison. I did, 
however, change both TreeMaps to be defined by the NavigableMap interface to 
allow for more pluggability in the future.
# I would actually consider {{startTxnId}} to be the opposite of 
{{highestTxnId}} (this pair is lowest/highest txn IDs currently in the cache). 
I agree that the presence of both "start" and "initial" is confusing, so I have 
renamed {{startTxnId}} to {{lowestTxnId}}. I think this is clearer now: 
"lowest", "highest", "initial".
# Sure, SGTM. Updated.
# Good call, thanks. Updated.


Updating with v003 patch incorporating the comments above.
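
For readers following along, a minimal sketch of the evict-before-add ordering 
and the lowest/highest naming discussed here; this is illustrative, not the 
patch itself:

{code:java}
// Sketch: evict the oldest batches *before* adding the new one, so the cache
// is never over capacity at the moment it logs or serves reads.
private final NavigableMap<Long, byte[]> dataMap = new TreeMap<>();
private final long capacity = 1024 * 1024;  // example: 1 MB of cached edits
private long totalSize = 0;
private long lowestTxnId = -1;
private long highestTxnId = -1;

synchronized void cacheEdits(long firstTxnId, long lastTxnId, byte[] edits) {
  while (!dataMap.isEmpty() && totalSize + edits.length > capacity) {
    totalSize -= dataMap.pollFirstEntry().getValue().length;  // drop oldest
  }
  dataMap.put(firstTxnId, edits);
  totalSize += edits.length;
  lowestTxnId = dataMap.firstKey();  // lowest txn ID still cached
  highestTxnId = lastTxnId;          // highest txn ID cached
}
{code}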

> [Edit Tail Fast Path Pt 1] Enhance JournalNode with an in-memory cache of 
> recent edit transactions
> --
>
> Key: HDFS-13607
> URL: https://issues.apache.org/jira/browse/HDFS-13607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, journal-node
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13607-HDFS-12943.000.patch, 
> HDFS-13607-HDFS-12943.001.patch, HDFS-13607-HDFS-12943.002.patch
>
>
> See HDFS-13150 for full design.
> This JIRA is to add the in-memory cache of recent edit transactions on the 
> JournalNode. This JIRA does not include accesses to this cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502331#comment-16502331
 ] 

genericqa commented on HDFS-12284:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 21 new + 0 unchanged - 0 fixed = 21 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 27s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
26s{color} | {color:red} The patch generated 12 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRBFConfigFields 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-12284 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926608/HDFS-12284.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux e28bded6e4b6 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / baebe4d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24385/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24385/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 

[jira] [Updated] (HDFS-13547) Add ingress port based sasl resolver

2018-06-05 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13547:
--
   Resolution: Fixed
Fix Version/s: 3.1.1
   Status: Resolved  (was: Patch Available)

> Add ingress port based sasl resolver
> 
>
> Key: HDFS-13547
> URL: https://issues.apache.org/jira/browse/HDFS-13547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.1.1
>
> Attachments: HDFS-13547.001.patch, HDFS-13547.002.patch, 
> HDFS-13547.003.patch, HDFS-13547.004.patch
>
>
> This Jira extends the SASL properties resolver interface to take an ingress 
> port parameter, and also adds an implementation based on this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13547) Add ingress port based sasl resolver

2018-06-05 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502302#comment-16502302
 ] 

Chen Liang commented on HDFS-13547:
---

Committed to trunk, thanks [~shv] for the review!

> Add ingress port based sasl resolver
> 
>
> Key: HDFS-13547
> URL: https://issues.apache.org/jira/browse/HDFS-13547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13547.001.patch, HDFS-13547.002.patch, 
> HDFS-13547.003.patch, HDFS-13547.004.patch
>
>
> This Jira extends the SASL properties resolver interface to take an ingress 
> port parameter, and also adds an implementation based on this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-06-05 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13536:
--
Status: Patch Available  (was: Open)

> [PROVIDED Storage] HA for InMemoryAliasMap
> --
>
> Key: HDFS-13536
> URL: https://issues.apache.org/jira/browse/HDFS-13536
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13536.001.patch
>
>
> Provide HA for the {{InMemoryLevelDBAliasMapServer}} to work with HDFS NN 
> configured for high availability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-06-05 Thread Virajith Jalaparti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502297#comment-16502297
 ] 

Virajith Jalaparti edited comment on HDFS-13536 at 6/5/18 6:50 PM:
---

 [^HDFS-13536.001.patch] enables the {{InMemoryAliasMapProtocol}} to support 
HA when the Namenodes are configured with HA. It defines a new failover proxy 
provider, {{InMemoryAliasMapFailoverProxyProvider}}, to accomplish this. 
{{InMemoryAliasMapFailoverProxyProvider}} is implemented as a subclass of 
{{ConfiguredFailoverProxyProvider}} -- it has the exact functionality of 
{{ConfiguredFailoverProxyProvider}} except for the address used to create the 
proxy ({{dfs.provided.aliasmap.inmemory.rpc.address}} is used instead of 
{{dfs.namenode.rpc-address}}).

{{NamenodeProtocols}} is not extended to include {{InMemoryAliasMapProtocol}}, 
to allow for the possibility of running the InMemoryAliasMap outside the 
Namenode. While running it in the Namenode simplifies deployment, one might 
prefer not to, to reduce the burden on the Namenode (memory, RPCs, etc.).

[~ehiggs], can you take a look at this?


was (Author: virajith):
 [^HDFS-13536.001.patch] enables the {{InMemoryAliasMapProtocol}} to support 
HA when the Namenodes are configured with HA. It defines a new failover proxy 
provider, {{InMemoryAliasMapFailoverProxyProvider}}, to accomplish this. 
{{InMemoryAliasMapFailoverProxyProvider}} is implemented as a subclass of 
{{ConfiguredFailoverProxyProvider}} -- it has the exact functionality of 
{{ConfiguredFailoverProxyProvider}} except for the address used to create the 
proxy ({{dfs.provided.aliasmap.inmemory.rpc.address}} is used instead of 
{{dfs.namenode.rpc-address}}).

{{NamenodeProtocols}} is not extended to include {{InMemoryAliasMapProtocol}}, 
to allow for the possibility of running the InMemoryAliasMap outside the 
Namenode. While running it in the Namenode simplifies deployment, one might 
prefer not to, to reduce the burden on the Namenode (memory, RPCs, etc.).

> [PROVIDED Storage] HA for InMemoryAliasMap
> --
>
> Key: HDFS-13536
> URL: https://issues.apache.org/jira/browse/HDFS-13536
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13536.001.patch
>
>
> Provide HA for the {{InMemoryLevelDBAliasMapServer}} to work with HDFS NN 
> configured for high availability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-06-05 Thread Virajith Jalaparti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502297#comment-16502297
 ] 

Virajith Jalaparti commented on HDFS-13536:
---

 [^HDFS-13536.001.patch] enables the {{InMemoryAliasMapProtocol}} to support 
HA when the Namenodes are configured with HA. It defines a new failover proxy 
provider, {{InMemoryAliasMapFailoverProxyProvider}}, to accomplish this. 
{{InMemoryAliasMapFailoverProxyProvider}} is implemented as a subclass of 
{{ConfiguredFailoverProxyProvider}} -- it has the exact functionality of 
{{ConfiguredFailoverProxyProvider}} except for the address used to create the 
proxy ({{dfs.provided.aliasmap.inmemory.rpc.address}} is used instead of 
{{dfs.namenode.rpc-address}}).

{{NamenodeProtocols}} is not extended to include {{InMemoryAliasMapProtocol}}, 
to allow for the possibility of running the InMemoryAliasMap outside the 
Namenode. While running it in the Namenode simplifies deployment, one might 
prefer not to, to reduce the burden on the Namenode (memory, RPCs, etc.).
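
A minimal sketch of that subclassing approach; it assumes the parent class can 
be pointed at an alternate address key, which may not match the patch's actual 
plumbing:

{code:java}
// Sketch only: the five-argument super constructor taking an address key is
// an assumption for illustration.
public class InMemoryAliasMapFailoverProxyProvider<T>
    extends ConfiguredFailoverProxyProvider<T> {
  public InMemoryAliasMapFailoverProxyProvider(
      Configuration conf, URI uri, Class<T> xface, HAProxyFactory<T> factory) {
    super(conf, uri, xface, factory,
        "dfs.provided.aliasmap.inmemory.rpc.address"); // not dfs.namenode.rpc-address
  }
}
{code}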

> [PROVIDED Storage] HA for InMemoryAliasMap
> --
>
> Key: HDFS-13536
> URL: https://issues.apache.org/jira/browse/HDFS-13536
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13536.001.patch
>
>
> Provide HA for the {{InMemoryLevelDBAliasMapServer}} to work with HDFS NN 
> configured for high availability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-06-05 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13536:
--
Attachment: HDFS-13536.001.patch

> [PROVIDED Storage] HA for InMemoryAliasMap
> --
>
> Key: HDFS-13536
> URL: https://issues.apache.org/jira/browse/HDFS-13536
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13536.001.patch
>
>
> Provide HA for the {{InMemoryLevelDBAliasMapServer}} to work with HDFS NN 
> configured for high availability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-06-05 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13536:
--
Attachment: (was: HDFS-13536.001.patch)

> [PROVIDED Storage] HA for InMemoryAliasMap
> --
>
> Key: HDFS-13536
> URL: https://issues.apache.org/jira/browse/HDFS-13536
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
>
> Provide HA for the {{InMemoryLevelDBAliasMapServer}} to work with HDFS NN 
> configured for high availability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-06-05 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13536:
--
Attachment: HDFS-13536.001.patch

> [PROVIDED Storage] HA for InMemoryAliasMap
> --
>
> Key: HDFS-13536
> URL: https://issues.apache.org/jira/browse/HDFS-13536
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13536.001.patch
>
>
> Provide HA for the {{InMemoryLevelDBAliasMapServer}} to work with HDFS NN 
> configured for high availability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13186) [PROVIDED Phase 2] Multipart Multinode uploader API + Implementations

2018-06-05 Thread Chris Douglas (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502249#comment-16502249
 ] 

Chris Douglas commented on HDFS-13186:
--

Only a few minor points:
* {{LocalFileSystemPathHandle}} should be in the {{org.apache.hadoop.fs}} 
package, rather than {{org.apache.hadoop}}
* The changes to {{FileSystem}} no longer appear necessary
* The filesystem field in {{FileSystemMultipartUploader}}, 
{{S3AMultipartUploader}} can be final
* In {{FileSystemMultipartUploader}}, is it an error to have entries with the 
same key in the list?
* Does the contract test for the local FS pass, after adding support for 
{{PathHandle}}?
* {{MultipartUploader}} could use some more javadoc to guide implementers

Otherwise this lgtm.

> [PROVIDED Phase 2] Multipart Multinode uploader API + Implementations
> -
>
> Key: HDFS-13186
> URL: https://issues.apache.org/jira/browse/HDFS-13186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13186.001.patch, HDFS-13186.002.patch, 
> HDFS-13186.003.patch, HDFS-13186.004.patch, HDFS-13186.005.patch
>
>
> To write files in parallel to an external storage system as in HDFS-12090, 
> there are two approaches:
>  # Naive approach: use a single datanode per file that copies blocks locally 
> as it streams data to the external service. This requires a copy for each 
> block inside the HDFS system and then a copy for the block to be sent to the 
> external system.
>  # Better approach: Single point (e.g. Namenode or SPS style external client) 
> and Datanodes coordinate in a multipart - multinode upload.
> This system needs to work with multiple back ends and needs to coordinate 
> across the network. So we propose an API that resembles the following:
> {code:java}
> public UploadHandle multipartInit(Path filePath) throws IOException;
> public PartHandle multipartPutPart(InputStream inputStream,
>     int partNumber, UploadHandle uploadId) throws IOException;
> public void multipartComplete(Path filePath,
>     List<Pair<Integer, PartHandle>> handles,
>     UploadHandle multipartUploadId) throws IOException;{code}
> Here, UploadHandle and PartHandle are opaque handles in the vein of 
> PathHandle, so they can be serialized and deserialized in the hadoop-hdfs 
> project without knowledge of how to deserialize, e.g., S3A's version of an 
> UploadHandle and PartHandle.
> In an object store such as S3A, the implementation is straightforward. In 
> the case of writing multipart/multinode to HDFS, we can write each block as a 
> file part. The complete call will perform a concat on the blocks.
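
A sketch of how a caller might drive the proposed API; the pairing of part 
numbers with {{PartHandle}}s follows the (reconstructed) generics in the 
snippet above, and how the uploader instance is obtained is left abstract:

{code:java}
// Illustrative driver for the proposed API; "uploader" implements the
// three methods sketched in the description.
Path target = new Path("/data/large.bin");
UploadHandle upload = uploader.multipartInit(target);

List<Pair<Integer, PartHandle>> parts = new ArrayList<>();
int partNumber = 1;
for (InputStream partStream : partStreams) {  // caller-supplied part streams
  PartHandle part = uploader.multipartPutPart(partStream, partNumber, upload);
  parts.add(Pair.of(partNumber, part));       // org.apache.commons.lang3.tuple.Pair
  partNumber++;
}
uploader.multipartComplete(target, parts, upload);  // e.g. concat on HDFS
{code}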



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13635) Incorrect message when block is not found

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502240#comment-16502240
 ] 

genericqa commented on HDFS-13635:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
101 unchanged - 1 fixed = 102 total (was 102) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13635 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926582/HDFS-13635.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4407e44f7cc1 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk 

[jira] [Commented] (HDFS-13547) Add ingress port based sasl resolver

2018-06-05 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502237#comment-16502237
 ] 

Chen Liang commented on HDFS-13547:
---

The failed test is unrelated; I will commit the patch shortly.

> Add ingress port based sasl resolver
> 
>
> Key: HDFS-13547
> URL: https://issues.apache.org/jira/browse/HDFS-13547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13547.001.patch, HDFS-13547.002.patch, 
> HDFS-13547.003.patch, HDFS-13547.004.patch
>
>
> This Jira extends the SASL properties resolver interface to take an ingress 
> port parameter, and also adds an implementation based on this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-147) Update Ozone site docs

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1650#comment-1650
 ] 

genericqa commented on HDDS-147:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
37m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
27s{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-147 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926601/HDDS-147.05.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 784dd20115f8 4.4.0-121-generic #145-Ubuntu SMP Fri Apr 13 
13:47:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 920d154 |
| maven | version: Apache Maven 3.3.9 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/251/artifact/out/whitespace-eol.txt
 |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/251/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 409 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/docs U: hadoop-ozone/docs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/251/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update Ozone site docs
> --
>
> Key: HDDS-147
> URL: https://issues.apache.org/jira/browse/HDDS-147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: document
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: documentation
> Attachments: HDDS-147.01.patch, HDDS-147.02.patch, HDDS-147.03.patch, 
> HDDS-147.04.patch, HDDS-147.05.patch
>
>
> Ozone site docs need a few updates to the command syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-129) Support for ReportManager in Datanode

2018-06-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502213#comment-16502213
 ] 

Hudson commented on HDDS-129:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14366 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14366/])
HDDS-129. Support for ReportManager in Datanode. Contributed by Nanda 
(aengineer: rev baebe4d52bc0e1ee3be062b61efa1de1d19a3bca)
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportManager.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisherFactory.java
* (add) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
* (add) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportManager.java
* (add) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/package-info.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/package-info.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ContainerReportPublisher.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/HeartbeatEndpointTask.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/NodeReportPublisher.java
* (add) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisherFactory.java


> Support for ReportManager in Datanode
> -
>
> Key: HDDS-129
> URL: https://issues.apache.org/jira/browse/HDDS-129
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-129.000.patch
>
>
> As part of Datanode startup, we should initialize {{ReportManager}}, which 
> will be responsible for updating the heartbeat with the Datanode reports 
> that have to be sent to SCM.
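
For context, a rough sketch of the wiring this adds (the builder method names 
are assumptions based on the file list above, not verified against the patch):

{code}
// Build a ReportManager that publishes node and container reports into the
// StateContext, from which the heartbeat picks them up for the SCM.
ReportManager reportManager = ReportManager.newBuilder(conf)
    .setStateContext(context)
    .addPublisherFor(NodeReportProto.class)        // assumed publisher types
    .addPublisherFor(ContainerReportsProto.class)
    .build();
reportManager.init();   // start the scheduled publishers
{code}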



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-06-05 Thread Sherwood Zheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502194#comment-16502194
 ] 

Sherwood Zheng commented on HDFS-12284:
---

Regarding [~elgoiri]'s first comment: I was running the full contract tests 
against a secure env, and in order to reuse the code in the contract classes I 
had to create separate classes for each fs operation, as done in the patch. I 
believe the approach in the patch is probably better; if I simply started a 
secure cluster and ran a few operations, I would end up with lots of duplicate 
code and poor modularity, because none of the contract classes and contract 
test files would be reused. A sketch of the pattern is below.
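
To make that concrete, here is a minimal sketch of one such per-operation 
class, assuming a contract implementation (called {{RouterHDFSContract}} here) 
that points the tests at the secure router; the class name is illustrative, 
not taken from the patch:

{code}
// One small class per FS operation: all create-related contract cases from
// AbstractContractCreateTest run unchanged against the secure router FS.
public class TestRouterHDFSContractCreateSecure
    extends AbstractContractCreateTest {
  @Override
  protected AbstractFSContract createContract(Configuration conf) {
    // Assumed contract class that starts / points at the Kerberized cluster.
    return new RouterHDFSContract(conf);
  }
}
{code}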

 

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284.000.patch, HDFS-12284.001.patch, 
> HDFS-12284.002.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-129) Support for ReportManager in Datanode

2018-06-05 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-129:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~nandakumar131] Thanks for the contribution. I have committed this to trunk.

> Support for ReportManager in Datanode
> -
>
> Key: HDDS-129
> URL: https://issues.apache.org/jira/browse/HDDS-129
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-129.000.patch
>
>
> As part of Datanode startup, we should initialize {{ReportManager}}, which 
> will be responsible for updating the heartbeat with the Datanode reports 
> that have to be sent to SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication

2018-06-05 Thread Sherwood Zheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sherwood Zheng updated HDFS-12284:
--
Attachment: HDFS-12284.002.patch

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284.000.patch, HDFS-12284.001.patch, 
> HDFS-12284.002.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502188#comment-16502188
 ] 

genericqa commented on HDDS-119:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
57s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 21m 
27s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
60m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 28m 10s{color} 
| {color:red} root generated 200 new + 1299 unchanged - 0 fixed = 1499 total 
(was 1299) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-build-tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 29s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
42s{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-119 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926584/HDDS-119.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 201a7089bad0 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 745f3a2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| compile | 
https://builds.apache.org/job/PreCommit-HDDS-Build/249/artifact/out/branch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDDS-Build/249/artifact/out/diff-compile-javac-root.txt
 |
| unit | 

[jira] [Commented] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502166#comment-16502166
 ] 

genericqa commented on HDDS-119:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
73m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 31m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-build-tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 23s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
40s{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-119 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926566/HDDS-119.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux c62e60f7e1ea 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 
12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 745f3a2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/248/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/248/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/248/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 981 (vs. ulimit 

[jira] [Commented] (HDDS-129) Support for ReportManager in Datanode

2018-06-05 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502157#comment-16502157
 ] 

Anu Engineer commented on HDDS-129:
---

+1. Changes look excellent. I will commit this now.

 

> Support for ReportManager in Datanode
> -
>
> Key: HDDS-129
> URL: https://issues.apache.org/jira/browse/HDDS-129
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-129.000.patch
>
>
> As part of Datanode startup, we should initialize {{ReportManager}}, which 
> will be responsible for updating the heartbeat with the Datanode reports 
> that have to be sent to SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-147) Update Ozone site docs

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502156#comment-16502156
 ] 

genericqa commented on HDDS-147:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
39m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-147 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926590/HDDS-147.04.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 8ebf0e66be56 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 920d154 |
| maven | version: Apache Maven 3.3.9 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/250/artifact/out/whitespace-eol.txt
 |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/250/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/docs U: hadoop-ozone/docs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/250/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update Ozone site docs
> --
>
> Key: HDDS-147
> URL: https://issues.apache.org/jira/browse/HDDS-147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: document
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: documentation
> Attachments: HDDS-147.01.patch, HDDS-147.02.patch, HDDS-147.03.patch, 
> HDDS-147.04.patch, HDDS-147.05.patch
>
>
> Ozone site docs need a few updates to the command syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-147) Update Ozone site docs

2018-06-05 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-147:
---
Attachment: HDDS-147.05.patch

> Update Ozone site docs
> --
>
> Key: HDDS-147
> URL: https://issues.apache.org/jira/browse/HDDS-147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: document
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: documentation
> Attachments: HDDS-147.01.patch, HDDS-147.02.patch, HDDS-147.03.patch, 
> HDDS-147.04.patch, HDDS-147.05.patch
>
>
> Ozone site docs need a few updates to the command syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13657) INodeId's LAST_RESERVED_ID may not as expected and the comment is misleading

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502125#comment-16502125
 ] 

genericqa commented on HDFS-13657:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 34m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}113m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}193m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.TestDatanodeLayoutUpgrade |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13657 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926570/HDFS-13657-trunk.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e342e31f63c6 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0e3c315 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HDDS-147) Update Ozone site docs

2018-06-05 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502126#comment-16502126
 ] 

Arpit Agarwal commented on HDDS-147:


v5 patch fixes the duplicate content (thanks [~nandakumar131] for pointing it 
out offline).

> Update Ozone site docs
> --
>
> Key: HDDS-147
> URL: https://issues.apache.org/jira/browse/HDDS-147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: document
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: documentation
> Attachments: HDDS-147.01.patch, HDDS-147.02.patch, HDDS-147.03.patch, 
> HDDS-147.04.patch, HDDS-147.05.patch
>
>
> Ozone site docs need a few updates to the command syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13653) Make dfs.client.failover.random.order a per nameservice configuration

2018-06-05 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502107#comment-16502107
 ] 

Íñigo Goiri commented on HDFS-13653:


For randomness, I have some code that checks whether it follows the chi-squared 
distribution, but that's totally overdone.
I would just add a test with a couple of namespaces, each with say 3 namenodes.
One of them would have the random flag set and the other would not.
Then we can loop over, say, 20 client creations and check which NN each client 
points to; a rough sketch follows below.
I would try to avoid using MiniDFSCluster.
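
A minimal sketch of that shape, assuming a hypothetical helper 
{{firstResolvedNameNode()}} that builds a client and reports which NN its 
failover proxy provider tries first (the per-nameservice key suffix is also an 
assumption):

{code}
// ns0 has random order enabled, ns1 does not. Over 20 client creations,
// ns1 must always resolve the same first NN, while ns0 should (with very
// high probability, given 3 NNs) hit more than one.
Configuration conf = new Configuration();
conf.setBoolean("dfs.client.failover.random.order.ns0", true);  // assumed key form
conf.setBoolean("dfs.client.failover.random.order.ns1", false);
Set<String> firstNNs0 = new HashSet<>();
Set<String> firstNNs1 = new HashSet<>();
for (int i = 0; i < 20; i++) {
  firstNNs0.add(firstResolvedNameNode(conf, "ns0"));  // hypothetical helper
  firstNNs1.add(firstResolvedNameNode(conf, "ns1"));
}
assertTrue(firstNNs0.size() > 1);
assertEquals(1, firstNNs1.size());
{code}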

> Make dfs.client.failover.random.order a per nameservice configuration
> -
>
> Key: HDFS-13653
> URL: https://issues.apache.org/jira/browse/HDFS-13653
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Major
> Attachments: HDFS-13653.001.patch, HDFS-13653.002.patch, 
> HDFS-13653.003.patch
>
>
> Currently the dfs.client.failover.random.order is applied globally. If we 
> have a combination of router and non-router nameservice, the random order 
> should ideally be enabled only for the router based nameservice. This Jira is 
> to make this configuration per-nameservice so that this can be configured 
> independently for each nameservice. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13121) NPE when request file descriptors when SC read

2018-06-05 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502099#comment-16502099
 ] 

Wei-Chiu Chuang commented on HDFS-13121:


Thanks for the patch [~zvenczel]!

Regarding the fix, can we consider throwing a less generic exception than 
IOException? If a more specific exception is thrown for the max-fd issue, the 
caller (BlockReaderFactory#createShortCircuitReplicaInfo) might be able to 
handle it better (for example, by backing off before retrying).
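
A rough sketch of that idea (the exception type and the placement of the null 
check are assumptions for illustration, not from the patch):

{code}
// In requestFileDescriptors(): fail fast with a dedicated exception so the
// caller can back off and retry, instead of NPE-ing on a null fd.
if (fis[0] == null || fis[1] == null) {
  throw new ShortCircuitFdsUnavailableException(   // hypothetical type
      "Did not receive file descriptors for " + block
      + "; the process may have hit its open-file limit");
}
{code}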

Regarding the test:
{code}
Path testfile = new Path("/testfile");
FSDataOutputStream fout = fs.create(testfile);
fout.write(fileData);
fout.close();
{code}
Using DFSTestUtil#createFile() is preferred; see the one-liner below.
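
For reference, the whole setup collapses to one call (the length, replication 
and seed values here are illustrative; createFile writes seeded random data 
rather than the exact fileData bytes):

{code}
// Creates /testfile with fileData.length bytes, replication 1, seed 0.
DFSTestUtil.createFile(fs, new Path("/testfile"), fileData.length, (short) 1, 0L);
{code}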

I'll review the rest of the test later.

> NPE when request file descriptors when SC read
> --
>
> Key: HDFS-13121
> URL: https://issues.apache.org/jira/browse/HDFS-13121
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Gang Xie
>Assignee: Zsolt Venczel
>Priority: Minor
> Attachments: HDFS-13121.01.patch, HDFS-13121.02.patch, 
> HDFS-13121.03.patch, test-only.patch
>
>
> Recently, we hit an issue where the DFSClient throws an NPE. The case is that 
> the app process exceeds the limit on the max number of open files. In this 
> case, libhadoop never throws an exception but returns null for the requested 
> fds. requestFileDescriptors then uses the returned fds directly, without any 
> check, and hits the NPE. 
>  
> We need to add a null-pointer sanity check here.
>  
> private ShortCircuitReplicaInfo requestFileDescriptors(DomainPeer peer,
>     Slot slot) throws IOException {
>   ShortCircuitCache cache = clientContext.getShortCircuitCache();
>   final DataOutputStream out =
>       new DataOutputStream(new BufferedOutputStream(peer.getOutputStream()));
>   SlotId slotId = slot == null ? null : slot.getSlotId();
>   new Sender(out).requestShortCircuitFds(block, token, slotId, 1,
>       failureInjector.getSupportsReceiptVerification());
>   DataInputStream in = new DataInputStream(peer.getInputStream());
>   BlockOpResponseProto resp = BlockOpResponseProto.parseFrom(
>       PBHelperClient.vintPrefixed(in));
>   DomainSocket sock = peer.getDomainSocket();
>   failureInjector.injectRequestFileDescriptorsFailure();
>   switch (resp.getStatus()) {
>   case SUCCESS:
>     byte buf[] = new byte[1];
>     FileInputStream[] fis = new FileInputStream[2];
>     sock.recvFileInputStreams(fis, buf, 0, buf.length);  // highlighted in the original
>     ShortCircuitReplica replica = null;
>     try {
>       ExtendedBlockId key =
>           new ExtendedBlockId(block.getBlockId(), block.getBlockPoolId());
>       if (buf[0] == USE_RECEIPT_VERIFICATION.getNumber()) {
>         LOG.trace("Sending receipt verification byte for slot {}", slot);
>         sock.getOutputStream().write(0);
>       }
>       replica = new ShortCircuitReplica(key, fis[0], fis[1], cache,  // highlighted
>           Time.monotonicNow(), slot);



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-107) Ozone: TestOzoneConfigurationFields is failing

2018-06-05 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502070#comment-16502070
 ] 

Mukul Kumar Singh commented on HDDS-107:


Thanks for the reviews [~GeLiXin], [~ajayydv] and [~nandakumar131]. I will 
commit this shortly.

> Ozone: TestOzoneConfigurationFields is failing
> --
>
> Key: HDDS-107
> URL: https://issues.apache.org/jira/browse/HDDS-107
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-107.001.patch, HDDS-107.002.patch, 
> HDFS-13449-HDFS-7240.000.patch
>
>
> {{TestOzoneConfigurationFields}} is failing because of two properties 
> introduced in ozone-default.xml by HDFS-13197
>  * hadoop.tags.system
>  * ozone.tags.custom
> Which are not present in any ConfigurationClasses.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-154) Ozone documentation updates

2018-06-05 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-154:
--

Assignee: Arpit Agarwal

> Ozone documentation updates
> ---
>
> Key: HDDS-154
> URL: https://issues.apache.org/jira/browse/HDDS-154
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> Follow up updates to Ozone documentation from HDDS-147:
> # Describe how to start Ozone module separately without HDFS datanodes (see 
> HDDS-94)
> # Update command docs to describe the different formats in which the service 
> address can be specified on the command line.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13607) [Edit Tail Fast Path Pt 1] Enhance JournalNode with an in-memory cache of recent edit transactions

2018-06-05 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502063#comment-16502063
 ] 

Chen Liang commented on HDFS-13607:
---

The v002 patch looks pretty good overall; just one comment. In 
JournaledEditsCache#storeEdits, it looks like we add the input to the buffer 
first, then check if it is larger than capacity and, if so, remove the 
exceeding part. It seems to me this means dataMap can effectively grow larger 
than capacity. To avoid being confused by the warn log message in the future, 
can we swap the order, i.e. remove first if needed, then add? Something like:
{code}
// Evict oldest entries until the new input fits, then insert it.
while (totalSize + input.length > capacity) {
  // call dataMap.remove() on the oldest entry and update totalSize
}
dataMap.put(...);
totalSize += input.length;
{code}

> [Edit Tail Fast Path Pt 1] Enhance JournalNode with an in-memory cache of 
> recent edit transactions
> --
>
> Key: HDFS-13607
> URL: https://issues.apache.org/jira/browse/HDFS-13607
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, journal-node
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13607-HDFS-12943.000.patch, 
> HDFS-13607-HDFS-12943.001.patch, HDFS-13607-HDFS-12943.002.patch
>
>
> See HDFS-13150 for full design.
> This JIRA is to add the in-memory cache of recent edit transactions on the 
> JournalNode. This JIRA does not include accesses to this cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-147) Update Ozone site docs

2018-06-05 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502050#comment-16502050
 ] 

Arpit Agarwal edited comment on HDDS-147 at 6/5/18 4:28 PM:


Thanks for the review [~nandakumar131]!
{quote}For this we need HDDS-94, this will introduce the option to start 
{{hdds-datanode}} (without hdfs modules)
{quote}
v4 patch incorporates your comments. I will defer this until HDDS-94 is done. 
Filed HDDS-154.

 
{quote}The {{oz}} command syntax that is given in the doc doesn't mention 
anything about the port. Should we explain different formats in which we can 
run a command? Follow up jira can be created to add it.
{quote}
Also deferring this to HDDS-154.


was (Author: arpitagarwal):
Thanks for the review [~nandakumar131]!
{quote}For this we need HDDS-94, this will introduce the option to start 
{{hdds-datanode}} (without hdfs modules)
{quote}
I've updated the patch with your comments. I will defer this until HDDS-94 is 
done. Filed HDDS-154.

 
{quote}The {{oz}} command syntax that is given in the doc doesn't mention 
anything about the port. Should we explain different formats in which we can 
run a command? Follow up jira can be created to add it.
{quote}
Also deferring this to HDDS-154.

> Update Ozone site docs
> --
>
> Key: HDDS-147
> URL: https://issues.apache.org/jira/browse/HDDS-147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: document
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: documentation
> Attachments: HDDS-147.01.patch, HDDS-147.02.patch, HDDS-147.03.patch, 
> HDDS-147.04.patch
>
>
> Ozone site docs need a few updates to the command syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-147) Update Ozone site docs

2018-06-05 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502050#comment-16502050
 ] 

Arpit Agarwal edited comment on HDDS-147 at 6/5/18 4:28 PM:


Thanks for the review [~nandakumar131]! v4 patch incorporates your comments. 
{quote}For this we need HDDS-94, this will introduce the option to start 
{{hdds-datanode}} (without hdfs modules)
{quote}
I will defer this until HDDS-94 is done. Filed HDDS-154.

 
{quote}The {{oz}} command syntax that is given in the doc doesn't mention 
anything about the port. Should we explain different formats in which we can 
run a command? Follow up jira can be created to add it.
{quote}
Also deferring this to HDDS-154.


was (Author: arpitagarwal):
Thanks for the review [~nandakumar131]!
{quote}For this we need HDDS-94, this will introduce the option to start 
{{hdds-datanode}} (without hdfs modules)
{quote}
v4 patch incorporates your comments. I will defer this until HDDS-94 is done. 
Filed HDDS-154.

 
{quote}The {{oz}} command syntax that is given in the doc doesn't mention 
anything about the port. Should we explain different formats in which we can 
run a command? Follow up jira can be created to add it.
{quote}
Also deferring this to HDDS-154.

> Update Ozone site docs
> --
>
> Key: HDDS-147
> URL: https://issues.apache.org/jira/browse/HDDS-147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: document
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: documentation
> Attachments: HDDS-147.01.patch, HDDS-147.02.patch, HDDS-147.03.patch, 
> HDDS-147.04.patch
>
>
> Ozone site docs need a few updates to the command syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-147) Update Ozone site docs

2018-06-05 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502050#comment-16502050
 ] 

Arpit Agarwal commented on HDDS-147:


Thanks for the review [~nandakumar131]!
{quote}For this we need HDDS-94, this will introduce the option to start 
{{hdds-datanode}} (without hdfs modules)
{quote}
I've updated the patch with your comments. I will defer this until HDDS-94 is 
done. Filed HDDS-154.

 
{quote}The {{oz}} command syntax that is given in the doc doesn't mention 
anything about the port. Should we explain different formats in which we can 
run a command? Follow up jira can be created to add it.
{quote}
Also deferring this to HDDS-154.

> Update Ozone site docs
> --
>
> Key: HDDS-147
> URL: https://issues.apache.org/jira/browse/HDDS-147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: document
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: documentation
> Attachments: HDDS-147.01.patch, HDDS-147.02.patch, HDDS-147.03.patch, 
> HDDS-147.04.patch
>
>
> Ozone site docs need a few updates to the command syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-154) Ozone documentation updates

2018-06-05 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-154:
--

 Summary: Ozone documentation updates
 Key: HDDS-154
 URL: https://issues.apache.org/jira/browse/HDDS-154
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal


Follow up updates to Ozone documentation from HDDS-147:
# Describe how to start Ozone module separately without HDFS datanodes (see 
HDDS-94)
# Update command docs to describe the different formats in which the service 
address can be specified on the command line.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-147) Update Ozone site docs

2018-06-05 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-147:
---
Attachment: HDDS-147.04.patch

> Update Ozone site docs
> --
>
> Key: HDDS-147
> URL: https://issues.apache.org/jira/browse/HDDS-147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: document
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: documentation
> Attachments: HDDS-147.01.patch, HDDS-147.02.patch, HDDS-147.03.patch, 
> HDDS-147.04.patch
>
>
> Ozone site docs need a few updates to the command syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13641) Add metrics for edit log tailing

2018-06-05 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502025#comment-16502025
 ] 

Chao Sun commented on HDFS-13641:
-

bq. As I see we have added some new metrics, can we update the unit test as well

Thanks [~linyiqun] for taking another look. I thought about this but couldn't 
find any util for testing {{MutableRate}}-type metrics (for instance, 
{{MetricsAsserts}} doesn't have anything related to it). It also seems that the 
existing {{MutableRate}} metrics are not tested. Do you have any suggestions? 
One rough idea is sketched below.
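
Sketched under the assumption that a new tailing metric is registered as a 
{{MutableRate}} named {{EditLogTailTime}} (the metric and record names are 
illustrative): {{MutableRate}} emits {{<name>NumOps}} and {{<name>AvgTime}}, so 
at least the op count is assertable with the existing {{MetricsAsserts}} 
helpers.

{code}
// After driving one tailing iteration, the rate's NumOps counter should
// have been incremented exactly once.
MetricsRecordBuilder rb = MetricsAsserts.getMetrics("NameNodeMetrics");
MetricsAsserts.assertCounter("EditLogTailTimeNumOps", 1L, rb);
{code}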

> Add metrics for edit log tailing 
> -
>
> Key: HDFS-13641
> URL: https://issues.apache.org/jira/browse/HDFS-13641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13641-HDFS-12943.000.patch, HDFS-13641.000.patch, 
> HDFS-13641.001.patch, HDFS-13641.002.patch
>
>
> We should add metrics for each iteration of edit log tailing, including 1) # 
> of edits loaded, 2) time spent in select input edit stream, 3) time spent in 
> loading the edits, 4) interval between the iterations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-148) Remove ContainerReportManager and ContainerReportManagerImpl

2018-06-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502004#comment-16502004
 ] 

Hudson commented on HDDS-148:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14365 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14365/])
HDDS-148. Remove ContainerReportManager and ContainerReportManagerImpl. (xyao: 
rev 920d154997f0ad6000d8f76029d6d415e7b8980c)
* (delete) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerReportManagerImpl.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerManagerImpl.java
* (delete) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/ContainerReportManager.java


> Remove ContainerReportManager and ContainerReportManagerImpl
> 
>
> Key: HDDS-148
> URL: https://issues.apache.org/jira/browse/HDDS-148
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-148.000.patch
>
>
> {{ContainerReportManager}} and {{ContainerReportManagerImpl}} are not used 
> anywhere, these classes can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-06-05 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501975#comment-16501975
 ] 

Elek, Marton commented on HDDS-119:
---

Thanks a lot [~ajayydv] for fixing it. I am trying to test it but I still see 
the warnings. Waiting for Jenkins, as I may be doing something wrong.

Do you have any reason for adding the exclusions to hadoop-ozone/pom.xml 
instead of hadoop-ozone/docs/pom.xml?

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
> Environment: {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-153) Add HA-aware proxy for OM client

2018-06-05 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-153:
---

Assignee: DENG FEI

> Add HA-aware proxy for OM client 
> -
>
> Key: HDDS-153
> URL: https://issues.apache.org/jira/browse/HDDS-153
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: DENG FEI
>Priority: Major
>
> This allows the client to talk to the OMs in the RATIS ring when a failover 
> (leader change) happens. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-153) Add HA-aware proxy for OM client

2018-06-05 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-153:
---

 Summary: Add HA-aware proxy for OM client 
 Key: HDDS-153
 URL: https://issues.apache.org/jira/browse/HDDS-153
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


This allows the client to talk to the OMs in the RATIS ring when a failover 
(leader change) happens. 
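
A rough sketch of the intended shape, using Hadoop's existing retry framework 
(the provider class and the protocol name below are assumptions, not part of 
this JIRA):

{code}
// Wrap the OM protocol in a retry proxy that fails over across the OMs in
// the RATIS ring on network errors / leader changes.
FailoverProxyProvider<OzoneManagerProtocol> provider =
    new OMFailoverProxyProvider(conf);              // hypothetical provider
OzoneManagerProtocol om = (OzoneManagerProtocol) RetryProxy.create(
    OzoneManagerProtocol.class, provider,
    RetryPolicies.failoverOnNetworkException(
        RetryPolicies.TRY_ONCE_THEN_FAIL, 10 /* maxFailovers */));
{code}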



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-151) Add HA support for Ozone

2018-06-05 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-151:
---

Assignee: Xiaoyu Yao  (was: DENG FEI)

> Add HA support for Ozone
> 
>
> Key: HDDS-151
> URL: https://issues.apache.org/jira/browse/HDDS-151
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> This includes HA for OM and SCM and their clients.  For OM and SCM, our 
> initial proposal is to use RATIS to ensure consistent/reliable replication of 
> metadata. We will post a design doc and create a separate branch for the 
> feature development.
> cc: [~anu], [~jnpandey], [~szetszwo], [~msingh], [~hellodengfei]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-152) Support HA for Ozone Manager

2018-06-05 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-152:
---

Assignee: DENG FEI

> Support HA for Ozone Manager
> 
>
> Key: HDDS-152
> URL: https://issues.apache.org/jira/browse/HDDS-152
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: DENG FEI
>Priority: Major
>
> Ozone Manager (OM) provides the name services on top of HDDS (SCM). This 
> ticket is opened to add HA support for the OM. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-152) Support HA for Ozone Manager

2018-06-05 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-152:
---

 Summary: Support HA for Ozone Manager
 Key: HDDS-152
 URL: https://issues.apache.org/jira/browse/HDDS-152
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


Ozone Manager (OM) provides the name services on top of HDDS (SCM). This ticket 
is opened to add HA support for the OM. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-148) Remove ContainerReportManager and ContainerReportManagerImpl

2018-06-05 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-148:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~nandakumar131] for the contribution. I've committed the patch to trunk.

> Remove ContainerReportManager and ContainerReportManagerImpl
> 
>
> Key: HDDS-148
> URL: https://issues.apache.org/jira/browse/HDDS-148
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-148.000.patch
>
>
> {{ContainerReportManager}} and {{ContainerReportManagerImpl}} are not used 
> anywhere, these classes can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-151) Add HA support for Ozone

2018-06-05 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-151:

Description: 
This includes HA for OM and SCM and their clients.  For OM and SCM, our initial 
proposal is to use RATIS to ensure consistent/reliable replication of metadata. 
We will post a design doc and create a separate branch for the feature 
development.

cc: [~anu], [~jnpandey], [~szetszwo], [~msingh], [~hellodengfei]

  was:
This includes HA for OM and SCM and their clients.  For OM and SCM, our initial 
proposal is to use RATIS to ensure consistent/reliable replication of metadata. 
We will post a design doc and create a separate branch for the feature 
development.

cc: [~anu], [~jnpandey], [~msingh], [~hellodengfei]


> Add HA support for Ozone
> 
>
> Key: HDDS-151
> URL: https://issues.apache.org/jira/browse/HDDS-151
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Xiaoyu Yao
>Assignee: DENG FEI
>Priority: Major
>
> This includes HA for OM and SCM and their clients.  For OM and SCM, our 
> initial proposal is to use RATIS to ensure consistent/reliable replication of 
> metadata. We will post a design doc and create a separate branch for the 
> feature development.
> cc: [~anu], [~jnpandey], [~szetszwo], [~msingh], [~hellodengfei]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-151) Add HA support for Ozone

2018-06-05 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-151:

Description: 
This includes HA for OM and SCM and their clients.  For OM and SCM, our initial 
proposal is to use RATIS to ensure consistent/reliable replication of metadata. 
We will post a design doc and create a separate branch for the feature 
development.

cc: [~anu], [~jnpandey], [~msingh], [~hellodengfei]

  was:
This includes HA for OM and SCM and their clients.  For OM and SCM, our initial 
proposal is to use RATIS to ensure consistent/reliable replication of metadata. 
We will post a design doc and create a separate branch for the feature 
development.

cc: [~anu], [~jnpandey], [~msingh]


> Add HA support for Ozone
> 
>
> Key: HDDS-151
> URL: https://issues.apache.org/jira/browse/HDDS-151
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Xiaoyu Yao
>Assignee: DENG FEI
>Priority: Major
>
> This includes HA for OM and SCM and their clients.  For OM and SCM, our 
> initial proposal is to use RATIS to ensure consistent/reliable replication of 
> metadata. We will post a design doc and create a separate branch for the 
> feature development.
> cc: [~anu], [~jnpandey], [~msingh], [~hellodengfei]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-06-05 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501952#comment-16501952
 ] 

Ajay Kumar commented on HDDS-119:
-

[~xyao] thanks for the review; patch v1 removes one redundant line from the last 
patch. [~elek], it is a minor one, please review if possible.
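For reference, this kind of license-header exemption is normally expressed 
through the apache-rat-plugin's exclude list. A minimal sketch, assuming the 
exclusions go into the hadoop-ozone/docs module's pom.xml (the exact module and 
relative paths are assumptions, not taken from the patch):

{code}
<!-- Sketch only: exempt bundled Hugo/Bootstrap doc assets from the
     Apache RAT license-header check. Module and paths are assumptions. -->
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>themes/ozonedoc/theme.toml</exclude>
      <exclude>themes/ozonedoc/static/**</exclude>
      <exclude>themes/ozonedoc/layouts/index.html</exclude>
      <exclude>static/OzoneOverview.svg</exclude>
    </excludes>
  </configuration>
</plugin>
{code}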



> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
> Environment: {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-06-05 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-119:

Attachment: HDDS-119.01.patch

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
> Environment: {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-06-05 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-119:

Attachment: (was: HDDS-119.01.patch)

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
> Environment: {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-151) Add HA support for Ozone

2018-06-05 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-151:
---

Assignee: DENG FEI

> Add HA support for Ozone
> 
>
> Key: HDDS-151
> URL: https://issues.apache.org/jira/browse/HDDS-151
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Xiaoyu Yao
>Assignee: DENG FEI
>Priority: Major
>
> This includes HA for OM and SCM and their clients.  For OM and SCM, our 
> initial proposal is to use RATIS to ensure consistent/reliable replication of 
> metadata. We will post a design doc and create a separate branch for the 
> feature development.
> cc: [~anu], [~jnpandey], [~msingh]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-06-05 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-119:

Attachment: HDDS-119.01.patch

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
> Environment: {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13635) Incorrect message when block is not found

2018-06-05 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501948#comment-16501948
 ] 

Gabor Bota commented on HDFS-13635:
---

In my patch I've renamed the constant NON_EXISTENT_REPLICA to 
NON_EXISTENT_REPLICA_APPEND, because that name describes its content better, 
and added a new NON_EXISTENT_REPLICA with the content "Replica does not exist " 
to reflect that in this case the replica simply does not exist and we don't 
want to append to it.
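For illustration, a minimal sketch of the resulting constants (the names and 
message texts are taken from the comment above; the enclosing class, presumably 
ReplicaNotFoundException, is an assumption):

{code}
// Sketch only: the renamed and the newly added message prefixes.
public static final String NON_EXISTENT_REPLICA_APPEND =
    "Cannot append to a non-existent replica ";  // was NON_EXISTENT_REPLICA
public static final String NON_EXISTENT_REPLICA =
    "Replica does not exist ";                   // new, for plain lookups
{code}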

> Incorrect message when block is not found
> -
>
> Key: HDFS-13635
> URL: https://issues.apache.org/jira/browse/HDFS-13635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13635.001.patch
>
>
> When a client opens a file, it asks the DataNode to check the blocks' visible 
> length. If somehow the block is not on the DN, it throws a "Cannot append to a 
> non-existent replica" message, which is incorrect because 
> getReplicaVisibleLength() is called for a different purpose, not for appending 
> to a block. It should just state that the block is not found.
> The following stacktrace comes from CDH 5.13, but it looks like the same 
> warning exists in Apache Hadoop trunk.
> {noformat}
> 2018-05-29 09:23:41,966 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 2 on 50020, call 
> org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol.getReplicaVisibleLength
>  from 10.0.0.14:53217 Call#38334117 Retry#0
> org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot 
> append to a non-existent replica 
> BP-725378529-10.236.236.8-1410027444173:13276792346
>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:792)
>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaVisibleLength(FsDatasetImpl.java:2588)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getReplicaVisibleLength(DataNode.java:2756)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getReplicaVisibleLength(ClientDatanodeProtocolServerSideTranslatorPB.java:107)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17873)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211){noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13635) Incorrect message when block is not found

2018-06-05 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13635:
--
Status: Patch Available  (was: Open)

> Incorrect message when block is not found
> -
>
> Key: HDFS-13635
> URL: https://issues.apache.org/jira/browse/HDFS-13635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13635.001.patch
>
>
> When a client opens a file, it asks the DataNode to check the blocks' visible 
> length. If somehow the block is not on the DN, it throws a "Cannot append to a 
> non-existent replica" message, which is incorrect because 
> getReplicaVisibleLength() is called for a different purpose, not for appending 
> to a block. It should just state that the block is not found.
> The following stacktrace comes from CDH 5.13, but it looks like the same 
> warning exists in Apache Hadoop trunk.
> {noformat}
> 2018-05-29 09:23:41,966 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 2 on 50020, call 
> org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol.getReplicaVisibleLength
>  from 10.0.0.14:53217 Call#38334117 Retry#0
> org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot 
> append to a non-existent replica 
> BP-725378529-10.236.236.8-1410027444173:13276792346
>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:792)
>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaVisibleLength(FsDatasetImpl.java:2588)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getReplicaVisibleLength(DataNode.java:2756)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getReplicaVisibleLength(ClientDatanodeProtocolServerSideTranslatorPB.java:107)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17873)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211){noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-151) Add HA support for Ozone

2018-06-05 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-151:
---

 Summary: Add HA support for Ozone
 Key: HDDS-151
 URL: https://issues.apache.org/jira/browse/HDDS-151
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Reporter: Xiaoyu Yao


This includes HA for OM and SCM and their clients.  For OM and SCM, our initial 
proposal is to use RATIS to ensure consistent/reliable replication of metadata. 
We will post a design doc and create a separate branch for the feature 
development.

cc: [~anu], [~jnpandey], [~msingh]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13635) Incorrect message when block is not found

2018-06-05 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13635:
--
Attachment: HDFS-13635.001.patch

> Incorrect message when block is not found
> -
>
> Key: HDFS-13635
> URL: https://issues.apache.org/jira/browse/HDFS-13635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13635.001.patch
>
>
> When a client opens a file, it asks the DataNode to check the blocks' visible 
> length. If somehow the block is not on the DN, it throws a "Cannot append to a 
> non-existent replica" message, which is incorrect because 
> getReplicaVisibleLength() is called for a different purpose, not for appending 
> to a block. It should just state that the block is not found.
> The following stacktrace comes from CDH 5.13, but it looks like the same 
> warning exists in Apache Hadoop trunk.
> {noformat}
> 2018-05-29 09:23:41,966 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 2 on 50020, call 
> org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol.getReplicaVisibleLength
>  from 10.0.0.14:53217 Call#38334117 Retry#0
> org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot 
> append to a non-existent replica 
> BP-725378529-10.236.236.8-1410027444173:13276792346
>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:792)
>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaVisibleLength(FsDatasetImpl.java:2588)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getReplicaVisibleLength(DataNode.java:2756)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getReplicaVisibleLength(ClientDatanodeProtocolServerSideTranslatorPB.java:107)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17873)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211){noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-148) Remove ContainerReportManager and ContainerReportManagerImpl

2018-06-05 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501938#comment-16501938
 ] 

Xiaoyu Yao commented on HDDS-148:
-

Thanks [~nandakumar131] for reporting the issue and posting the patch. The 
patch LGTM, +1. I will commit it shortly.

> Remove ContainerReportManager and ContainerReportManagerImpl
> 
>
> Key: HDDS-148
> URL: https://issues.apache.org/jira/browse/HDDS-148
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-148.000.patch
>
>
> {{ContainerReportManager}} and {{ContainerReportManagerImpl}} are not used 
> anywhere; these classes can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-147) Update Ozone site docs

2018-06-05 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501937#comment-16501937
 ] 

Xiaoyu Yao commented on HDDS-147:
-

{quote}
{quote}
I don't know how to run Ozone separately without the HDFS DataNode. Could you 
please clarify that?
{quote}
For this we need HDDS-94, which will introduce the option to start 
{{hdds-datanode}} (without hdfs modules).
{quote}
Thanks [~nandakumar131] for the pointers; let's leave this as-is until HDDS-94 
is implemented.

> Update Ozone site docs
> --
>
> Key: HDDS-147
> URL: https://issues.apache.org/jira/browse/HDDS-147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: document
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: documentation
> Attachments: HDDS-147.01.patch, HDDS-147.02.patch, HDDS-147.03.patch
>
>
> Ozone site docs need a few updates to the command syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-06-05 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-119:

Fix Version/s: 0.2.1

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
> Environment: {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-06-05 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501930#comment-16501930
 ] 

Xiaoyu Yao commented on HDDS-119:
-

Thanks [~ajayydv] for fixing this. The patch LGTM, +1 pending Jenkins.

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
> Environment: {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-119.00.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-06-05 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-119:

Status: Patch Available  (was: Open)

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
> Environment: {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-119.00.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-107) Ozone: TestOzoneConfigurationFields is failing

2018-06-05 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501839#comment-16501839
 ] 

LiXin Ge commented on HDDS-107:
---

Thanks [~msingh], [~nandakumar131] and [~ajayydv] for moving this forward. +1 
(non-binding).
Sorry for the late response; for some unexpected reasons I am temporarily not 
allowed to contribute in the open-source community these weeks. I hope I can 
come back soon…

> Ozone: TestOzoneConfigurationFields is failing
> --
>
> Key: HDDS-107
> URL: https://issues.apache.org/jira/browse/HDDS-107
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-107.001.patch, HDDS-107.002.patch, 
> HDFS-13449-HDFS-7240.000.patch
>
>
> {{TestOzoneConfigurationFields}} is failing because of two properties 
> introduced in ozone-default.xml by HDFS-13197:
>  * hadoop.tags.system
>  * ozone.tags.custom
> These are not present in any of the configuration classes.
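A typical fix, sketched below under the assumption that 
{{TestOzoneConfigurationFields}} extends Hadoop's 
{{TestConfigurationFieldsBase}} (field and class names of the Ozone test are 
assumptions, not taken from the patch), is either to add matching constants to 
a config class or to whitelist the XML-only keys:

{code}
// Sketch only: whitelist XML-only keys so the comparison test passes
// (requires java.util.HashSet; enclosing class assumed to extend
// TestConfigurationFieldsBase).
@Override
public void initializeMemberVariables() {
  xmlFilename = "ozone-default.xml";
  configurationClasses =
      new Class[] { OzoneConfigKeys.class, ScmConfigKeys.class };
  xmlPropsToSkipCompare = new HashSet<>();
  // Tag-related keys introduced by HDFS-13197 have no matching constant:
  xmlPropsToSkipCompare.add("hadoop.tags.system");
  xmlPropsToSkipCompare.add("ozone.tags.custom");
}
{code}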



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13657) INodeId's LAST_RESERVED_ID may not be as expected and the comment is misleading

2018-06-05 Thread Wang XL (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang XL updated HDFS-13657:
---
Attachment: HDFS-13657-trunk.001.patch
Status: Patch Available  (was: Open)

> INodeId's LAST_RESERVED_ID may not be as expected and the comment is misleading 
> -
>
> Key: HDFS-13657
> URL: https://issues.apache.org/jira/browse/HDFS-13657
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.0, 2.7.7
>Reporter: Wang XL
>Priority: Trivial
> Attachments: HDFS-13657-trunk.001.patch
>
>
> The comment of class INodeId is misleading. According to the comment, IDs 1 
> to 1000 are reserved for potential future usage, but the code {{public static 
> final long LAST_RESERVED_ID = 2 << 14 - 1}} actually reserves IDs 1 to 16384: 
> the operator '-' has higher precedence than '<<', so {{2 << 14 - 1}} equals 
> {{2 << 13}} (16384), not {{(2 << 14) - 1}}.
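The precedence claim is easy to verify with a self-contained snippet (not part 
of the patch):

{code}
// Java operator precedence: '-' binds tighter than '<<', so
// 2 << 14 - 1 parses as 2 << (14 - 1).
public class ShiftPrecedence {
  public static void main(String[] args) {
    System.out.println(2 << 14 - 1);    // 16384 (same as 2 << 13)
    System.out.println((2 << 14) - 1);  // 32767
  }
}
{code}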



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13657) INodeId's LAST_RESERVED_ID may not be as expected and the comment is misleading

2018-06-05 Thread Wang XL (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang XL updated HDFS-13657:
---
Affects Version/s: 2.7.7
   3.1.0

> INodeId's LAST_RESERVED_ID may not be as expected and the comment is misleading 
> -
>
> Key: HDFS-13657
> URL: https://issues.apache.org/jira/browse/HDFS-13657
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.0, 2.7.7
>Reporter: Wang XL
>Priority: Trivial
>
> The comment of class INodeId is misleading. According to the comment, IDs 1 
> to 1000 are reserved for potential future usage, but the code {{public static 
> final long LAST_RESERVED_ID = 2 << 14 - 1}} actually reserves IDs 1 to 16384: 
> the operator '-' has higher precedence than '<<', so {{2 << 14 - 1}} equals 
> {{2 << 13}} (16384), not {{(2 << 14) - 1}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13657) INodeId's LAST_RESERVED_ID may not be as expected and the comment is misleading

2018-06-05 Thread Wang XL (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang XL updated HDFS-13657:
---
Priority: Trivial  (was: Major)

> INodeId's LAST_RESERVED_ID may not be as expected and the comment is misleading 
> -
>
> Key: HDFS-13657
> URL: https://issues.apache.org/jira/browse/HDFS-13657
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Wang XL
>Priority: Trivial
>
> The comment of class INodeId is misleading. According to the comment, IDs 1 
> to 1000 are reserved for potential future usage, but the code {{public static 
> final long LAST_RESERVED_ID = 2 << 14 - 1}} actually reserves IDs 1 to 16384: 
> the operator '-' has higher precedence than '<<', so {{2 << 14 - 1}} equals 
> {{2 << 13}} (16384), not {{(2 << 14) - 1}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13657) INodeId's LAST_RESERVED_ID may not be as expected and the comment is misleading

2018-06-05 Thread Wang XL (JIRA)
Wang XL created HDFS-13657:
--

 Summary: INodeId's LAST_RESERVED_ID may not be as expected and the 
comment is misleading 
 Key: HDFS-13657
 URL: https://issues.apache.org/jira/browse/HDFS-13657
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Wang XL


The comment of class INodeId is misleading. According to the comment, IDs 1 to 
1000 are reserved for potential future usage, but the code {{public static 
final long LAST_RESERVED_ID = 2 << 14 - 1}} actually reserves IDs 1 to 16384: 
the operator '-' has higher precedence than '<<', so {{2 << 14 - 1}} equals 
{{2 << 13}} (16384), not {{(2 << 14) - 1}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-107) Ozone: TestOzoneConfigurationFields is failing

2018-06-05 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501770#comment-16501770
 ] 

Ajay Kumar commented on HDDS-107:
-

+1

> Ozone: TestOzoneConfigurationFields is failing
> --
>
> Key: HDDS-107
> URL: https://issues.apache.org/jira/browse/HDDS-107
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-107.001.patch, HDDS-107.002.patch, 
> HDFS-13449-HDFS-7240.000.patch
>
>
> {{TestOzoneConfigurationFields}} is failing because of two properties 
> introduced in ozone-default.xml by HDFS-13197:
>  * hadoop.tags.system
>  * ozone.tags.custom
> These are not present in any of the configuration classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2018-06-05 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-119:

Attachment: HDDS-119.00.patch

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document
> Environment: {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-119.00.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-9513) DataNodeManager#getDataNodeStorageInfos not backward compatibility

2018-06-05 Thread DENG FEI (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DENG FEI resolved HDFS-9513.

Resolution: Workaround

> DataNodeManager#getDataNodeStorageInfos not backward compatibility
> --
>
> Key: HDFS-9513
> URL: https://issues.apache.org/jira/browse/HDFS-9513
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, namenode
>Affects Versions: 2.2.0, 2.7.1
> Environment: 2.2.0 HDFS Client & 2.7.1 HDFS Cluster
>Reporter: DENG FEI
>Assignee: DENG FEI
>Priority: Blocker
> Attachments: HDFS-9513-20160621.patch, patch.HDFS-9513.20151207, 
> patch.HDFS-9513.20151216-2.7.2
>
>
> We upgraded our new HDFS cluster to 2.7.1, but our YARN cluster is still 
> 2.2.0 (8000+ nodes; it is too hard to upgrade it as quickly as the HDFS 
> cluster).
> The incompatibility shows up when the DataStreamer does pipeline recovery: 
> the NN needs the DNs' storage info to update the pipeline, and the 
> storageIds are paired with the pipeline's DNs. HDFS has supported the 
> storage type feature since 2.3.0 
> ([HDFS-2832|https://issues.apache.org/jira/browse/HDFS-2832]), so an older 
> client does not send storageIds; although the protobuf serialization keeps 
> the protocol wire-compatible, the client will get a remote 
> ArrayIndexOutOfBoundsException.
> 
> the exception stack is below:
> {noformat}
> 2015-12-05 20:26:38,291 ERROR [Thread-4] org.apache.hadoop.hdfs.DFSClient: 
> Failed to close file XXX
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:513)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6404)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:892)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:997)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1066)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy10.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updatePipeline(ClientNamenodeProtocolTranslatorPB.java:801)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy11.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1047)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> {noformat}
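For illustration, a minimal, self-contained sketch (a hypothetical 
simplification, not the actual DatanodeManager code) of why an old client 
triggers this: a pre-2.3.0 client sends no storageIDs, so indexing the storage 
IDs by datanode position fails immediately at index 0, matching the 
{{ArrayIndexOutOfBoundsException: 0}} in the stack trace above.

{code}
// Sketch only: the NN-side lookup assumes one storage ID per pipeline DN,
// which breaks when a 2.2.0 client sends an empty storageIDs array.
public class StorageIdMismatchSketch {
  public static void main(String[] args) {
    String[] datanodes = { "dn-1", "dn-2", "dn-3" }; // pipeline DNs
    String[] storageIDs = {}; // what a 2.2.0 client effectively sends

    for (int i = 0; i < datanodes.length; i++) {
      // A 2.7.1 NN expects one storage ID per datanode:
      String sid = storageIDs[i]; // throws ArrayIndexOutOfBoundsException: 0
      System.out.println(datanodes[i] + " -> " + sid);
    }
  }
}
{code}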



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


