[jira] [Commented] (HDDS-187) Command status publisher for datanode

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532279#comment-16532279
 ] 

genericqa commented on HDDS-187:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdds_container-service generated 2 new + 4 
unchanged - 0 fixed = 6 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 59s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
9s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.audit.TestOzoneAuditLogger |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-187 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930230/HDDS-187.06.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  

[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to back up blocks

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532227#comment-16532227
 ] 

genericqa commented on HDFS-13310:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
2s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12090 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
16s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} HDFS-12090 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-hdfs-project: The patch generated 56 new 
+ 692 unchanged - 1 fixed = 748 total (was 693) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 38s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 2 new 
+ 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}129m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}231m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  
org.apache.hadoop.hdfs.server.protocol.SyncTaskExecutionResult.getResult() may 
expose internal representation by returning SyncTaskExecutionResult.result  At 
SyncTaskExecutionResult.java:by returning SyncTaskExecutionResult.result  At 
SyncTaskExecutionResult.java:[line 34] |
|  |  new 
org.apache.hadoop.hdfs.server.protocol.SyncTaskExecutionResult(byte[], Long) 
may expose internal representation by storing an externally mutable object into 
SyncTaskExecutionResult.result  At 
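
The two entries above are FindBugs' standard EI_EXPOSE_REP / EI_EXPOSE_REP2 
warnings for mutable byte[] fields. A minimal sketch of the usual fix, with 
defensive copies in both the constructor and the getter (the class and field 
names come from the report; everything else is assumed):

{code:java}
// Hypothetical sketch: copy the byte[] on the way in and on the way out so
// callers can no longer mutate the internal state flagged by FindBugs.
public class SyncTaskExecutionResult {
  private final byte[] result;
  private final Long numberOfBytes;   // assumed second constructor argument

  public SyncTaskExecutionResult(byte[] result, Long numberOfBytes) {
    // defensive copy instead of storing the externally mutable array
    this.result = (result == null) ? null : result.clone();
    this.numberOfBytes = numberOfBytes;
  }

  public byte[] getResult() {
    // return a copy rather than the internal representation
    return (result == null) ? null : result.clone();
  }
}
{code}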

[jira] [Updated] (HDDS-187) Command status publisher for datanode

2018-07-03 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-187:

Attachment: HDDS-187.06.patch

> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch, 
> HDDS-187.03.patch, HDDS-187.04.patch, HDDS-187.05.patch, HDDS-187.06.patch
>
>
> Currently, SCM sends a set of commands to the DataNode, and the DataNode 
> executes them via CommandHandler. This jira intends to create a command 
> status publisher which will report the status of these commands back to the SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-187) Command status publisher for datanode

2018-07-03 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1653#comment-1653
 ] 

Ajay Kumar commented on HDDS-187:
-

Patch v6 attached to fix the findbugs warning.

> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch, 
> HDDS-187.03.patch, HDDS-187.04.patch, HDDS-187.05.patch, HDDS-187.06.patch
>
>
> Currently, SCM sends a set of commands to the DataNode, and the DataNode 
> executes them via CommandHandler. This jira intends to create a command 
> status publisher which will report the status of these commands back to the SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-187) Command status publisher for datanode

2018-07-03 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-187:

Attachment: (was: HDDS-175.06.patch)

> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch, 
> HDDS-187.03.patch, HDDS-187.04.patch, HDDS-187.05.patch
>
>
> Currently, SCM sends a set of commands to the DataNode, and the DataNode 
> executes them via CommandHandler. This jira intends to create a command 
> status publisher which will report the status of these commands back to the SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-187) Command status publisher for datanode

2018-07-03 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-187:

Attachment: HDDS-175.06.patch

> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch, 
> HDDS-187.03.patch, HDDS-187.04.patch, HDDS-187.05.patch
>
>
> Currently, SCM sends a set of commands to the DataNode, and the DataNode 
> executes them via CommandHandler. This jira intends to create a command 
> status publisher which will report the status of these commands back to the SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13528) RBF: If a directory exceeds quota limit then quota usage is not refreshed for other mount entries

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532210#comment-16532210
 ] 

genericqa commented on HDFS-13528:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
23s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13528 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930217/HDFS-13528.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5aaf679db08b 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7ca4f0c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24551/testReport/ |
| Max. process+thread count | 957 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24551/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: If a directory exceeds quota limit then quota usage is not refreshed for 
> other mount entries 
> 

[jira] [Comment Edited] (HDFS-13528) RBF: If a directory exceeds quota limit then quota usage is not refreshed for other mount entries

2018-07-03 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532166#comment-16532166
 ] 

Yiqun Lin edited comment on HDFS-13528 at 7/4/18 2:32 AM:
--

Attaching the same patch to re-trigger Jenkins. As we reached an agreement in 
HDFS-12615 to go ahead with normal bug fixes, I will commit this once Jenkins 
says okay.


was (Author: linyiqun):
Attaching the same patch to re-trigger Jenkins. I will commit this once Jenkins 
says okay.

> RBF: If a directory exceeds quota limit then quota usage is not refreshed for 
> other mount entries 
> --
>
> Key: HDFS-13528
> URL: https://issues.apache.org/jira/browse/HDFS-13528
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
> Attachments: HDFS-13528-000.patch, HDFS-13528-001.patch, 
> HDFS-13528.002.patch
>
>
> If the quota limit is exceeded, RouterQuotaUpdateService#periodicInvoke gets 
> a QuotaExceededException and does not update the quota usage for the rest of 
> the mount table entries.
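> 
> The fix pattern this points at is to scope the exception to each mount entry 
> so that one over-quota directory cannot abort the refresh of the others. A 
> minimal sketch, assuming illustrative names for everything except 
> periodicInvoke and QuotaExceededException:
> {code:java}
> import java.io.IOException;
> import java.util.List;
> 
> // Hypothetical sketch: catching QuotaExceededException inside the loop keeps
> // one over-quota mount entry from blocking the quota refresh of the rest.
> class QuotaRefreshSketch {
>   static class QuotaExceededException extends IOException { }
>   interface MountEntry { String getSourcePath(); }
> 
>   void periodicInvoke(List<MountEntry> entries) {
>     for (MountEntry entry : entries) {
>       try {
>         refreshQuotaUsage(entry);   // assumed per-entry refresh helper
>       } catch (QuotaExceededException e) {
>         // log and move on instead of aborting the whole pass
>         System.err.println("Quota exceeded for " + entry.getSourcePath()
>             + ", skipping this entry: " + e);
>       }
>     }
>   }
> 
>   void refreshQuotaUsage(MountEntry entry) throws QuotaExceededException {
>     // placeholder for the real per-entry quota usage update
>   }
> }
> {code}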



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13528) RBF: If a directory exceeds quota limit then quota usage is not refreshed for other mount entries

2018-07-03 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532166#comment-16532166
 ] 

Yiqun Lin commented on HDFS-13528:
--

Attaching the same patch to re-trigger Jenkins. I will commit this once Jenkins 
says okay.

> RBF: If a directory exceeds quota limit then quota usage is not refreshed for 
> other mount entries 
> --
>
> Key: HDFS-13528
> URL: https://issues.apache.org/jira/browse/HDFS-13528
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
> Attachments: HDFS-13528-000.patch, HDFS-13528-001.patch, 
> HDFS-13528.002.patch
>
>
> If the quota limit is exceeded, RouterQuotaUpdateService#periodicInvoke gets 
> a QuotaExceededException and does not update the quota usage for the rest of 
> the mount table entries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13528) RBF: If a directory exceeds quota limit then quota usage is not refreshed for other mount entries

2018-07-03 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13528:
-
Attachment: HDFS-13528.002.patch

> RBF: If a directory exceeds quota limit then quota usage is not refreshed for 
> other mount entries 
> --
>
> Key: HDFS-13528
> URL: https://issues.apache.org/jira/browse/HDFS-13528
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
> Attachments: HDFS-13528-000.patch, HDFS-13528-001.patch, 
> HDFS-13528.002.patch
>
>
> If the quota limit is exceeded, RouterQuotaUpdateService#periodicInvoke gets 
> a QuotaExceededException and does not update the quota usage for the rest of 
> the mount table entries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-187) Command status publisher for datanode

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532146#comment-16532146
 ] 

genericqa commented on HDDS-187:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-hdds/container-service generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdds_container-service generated 2 new + 4 
unchanged - 0 fixed = 6 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  String is incompatible with expected argument type Long in 
org.apache.hadoop.ozone.container.common.statemachine.StateContext.getCommandStatusMap(String)
  At StateContext.java:argument type Long in 
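
This is the FindBugs "unrelated types" class of warning: java.util.Map#get and 
Map#remove take Object, so querying a Long-keyed map with a String compiles but 
can never match. A minimal illustration of the bug and the fix (the map and 
method names here are assumed):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the reported warning.
class CommandStatusMapSketch {
  private final Map<Long, String> cmdStatusMap = new HashMap<>();

  String getBuggy(String cmdId) {
    return cmdStatusMap.get(cmdId);   // String key never matches a Long: flagged
  }

  String getFixed(long cmdId) {
    return cmdStatusMap.get(cmdId);   // boxed to Long: matches as intended
  }
}
{code}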

[jira] [Commented] (HDDS-217) Move all SCMEvents to a package

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532136#comment-16532136
 ] 

genericqa commented on HDDS-217:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-217 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930207/HDDS-217.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 018af727d44c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7ca4f0c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/429/testReport/ |
| Max. process+thread count | 303 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/429/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Move all SCMEvents to a package
> ---
>
> Key: HDDS-217
> URL: https://issues.apache.org/jira/browse/HDDS-217
> Project: Hadoop Distributed 

[jira] [Commented] (HDDS-187) Command status publisher for datanode

2018-07-03 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532095#comment-16532095
 ] 

Ajay Kumar commented on HDDS-187:
-

[~xyao] thanks for the review. Attached patch v5 with the following changes:
# Changed cmdId to long (generated from a Util class).
# Updated StateContext#cmdMap to hold mutable CommandStatus objects as values.
# Moved HeartbeatEndpointTask lines 191/204/215 into StateContext#addCommand().
# Moved the CommandHandler command status update into 
StateContext#updateCommandStatus.
# Renamed ozone.command.status.report.interval to 
hdds.command.status.report.interval.
# Added documentation at StateContext L291 to warn users about the null return.
# Added an API to remove the CommandStatus object from StateContext.
# Fixed the failing TestReportPublisher.
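
A rough sketch of the StateContext bookkeeping the list above describes; only 
addCommand, updateCommandStatus, and removeCommandStatus are named in the 
comment, so the types and fields here are illustrative assumptions:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: commands are registered on arrival, handlers mutate
// their status, and entries are removed once reported back to SCM.
class StateContextSketch {
  enum Status { PENDING, EXECUTED, FAILED }

  static class CommandStatus {
    final long cmdId;                       // cmdId is a long, per item 1
    volatile Status status = Status.PENDING;
    CommandStatus(long cmdId) { this.cmdId = cmdId; }
  }

  private final Map<Long, CommandStatus> cmdStatusMap =
      new ConcurrentHashMap<>();

  /** Register a command when it arrives from SCM (item 3). */
  void addCommand(long cmdId) {
    cmdStatusMap.putIfAbsent(cmdId, new CommandStatus(cmdId));
  }

  /** Handlers update status here instead of in CommandHandler (item 4). */
  void updateCommandStatus(long cmdId, Status status) {
    CommandStatus cs = cmdStatusMap.get(cmdId);
    if (cs != null) {                       // null is possible, per item 6
      cs.status = status;
    }
  }

  /** Remove an entry once its status has been published to SCM (item 7). */
  CommandStatus removeCommandStatus(long cmdId) {
    return cmdStatusMap.remove(cmdId);
  }
}
{code}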


> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch, 
> HDDS-187.03.patch, HDDS-187.04.patch, HDDS-187.05.patch
>
>
> Currently, SCM sends a set of commands to the DataNode, and the DataNode 
> executes them via CommandHandler. This jira intends to create a command 
> status publisher which will report the status of these commands back to the SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-217) Move all SCMEvents to a package

2018-07-03 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-217:
--
Attachment: HDDS-217.001.patch

> Move all SCMEvents to a package
> ---
>
> Key: HDDS-217
> URL: https://issues.apache.org/jira/browse/HDDS-217
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-217.001.patch
>
>
> Moving all SCM internal events to a single package makes it easy to write 
> event producers and consumers, and gives us a single location for all the 
> events. This patch is a simple refactoring.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-217) Move all SCMEvents to a package

2018-07-03 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-217:
--
Status: Patch Available  (was: Open)

> Move all SCMEvents to a package
> ---
>
> Key: HDDS-217
> URL: https://issues.apache.org/jira/browse/HDDS-217
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-217.001.patch
>
>
> Moving all SCM internal events to a single package makes it easy to write 
> event producers and consumers, and gives us a single location for all the 
> events. This patch is a simple refactoring.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-217) Move all SCMEvents to a package

2018-07-03 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-217:
-

 Summary: Move all SCMEvents to a package
 Key: HDDS-217
 URL: https://issues.apache.org/jira/browse/HDDS-217
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: 0.2.1


Moving all SCM internal events to a single package makes it easy to write 
event producers and consumers, and gives us a single location for all the 
events. This patch is a simple refactoring.
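
A minimal sketch of the shape such a central events class could take, assuming 
a simple typed-event wrapper; the event names and payload types below are 
purely illustrative:

{code:java}
// Hypothetical sketch: one final class holding typed event constants so that
// producers and consumers share a single definition for every SCM event.
final class ScmEventsSketch {

  /** Stand-in for a typed event descriptor. */
  static final class TypedEvent<T> {
    private final Class<T> payloadType;
    private final String name;

    TypedEvent(Class<T> payloadType, String name) {
      this.payloadType = payloadType;
      this.name = name;
    }

    Class<T> getPayloadType() { return payloadType; }
    String getName() { return name; }
  }

  // Illustrative events; the real set lives in the patch.
  static final TypedEvent<String> NODE_REPORT =
      new TypedEvent<>(String.class, "Node_Report");
  static final TypedEvent<Long> CONTAINER_REPORT =
      new TypedEvent<>(Long.class, "Container_Report");

  private ScmEventsSketch() { }
}
{code}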



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-187) Command status publisher for datanode

2018-07-03 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-187:

Attachment: HDDS-187.05.patch

> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch, 
> HDDS-187.03.patch, HDDS-187.04.patch, HDDS-187.05.patch
>
>
> Currently, SCM sends a set of commands to the DataNode, and the DataNode 
> executes them via CommandHandler. This jira intends to create a command 
> status publisher which will report the status of these commands back to the SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to back up blocks

2018-07-03 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13310:
--
Status: Patch Available  (was: Open)

> [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to back up 
> blocks
> --
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch, HDFS-13310-HDFS-12090.003.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instruct them to back up a block.
> This should take the form of two sub-commands: PUT_FILE (when the file is 
> <= 1 block in size) and MULTIPART_PUT_PART (when it is part of a Multipart 
> Upload; see HDFS-13186).
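> 
> A rough sketch of how a datanode might dispatch the proposed command; only 
> the two sub-command names come from the description, and everything else is 
> an assumption for illustration:
> {code:java}
> // Hypothetical sketch of DNA_BACKUP dispatch from a heartbeat response.
> class BackupCommandSketch {
>   enum BackupAction { PUT_FILE, MULTIPART_PUT_PART }
> 
>   static class BackupCommand {
>     final BackupAction action;
>     final long blockId;
>     BackupCommand(BackupAction action, long blockId) {
>       this.action = action;
>       this.blockId = blockId;
>     }
>   }
> 
>   void handle(BackupCommand cmd) {
>     switch (cmd.action) {
>       case PUT_FILE:
>         putFile(cmd.blockId);   // file fits in one block: single PUT
>         break;
>       case MULTIPART_PUT_PART:
>         putPart(cmd.blockId);   // one block as one part of a multipart
>         break;                  // upload coordinated via HDFS-13186
>     }
>   }
> 
>   void putFile(long blockId) { /* assumed single-shot upload */ }
>   void putPart(long blockId) { /* assumed multipart part upload */ }
> }
> {code}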



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to back up blocks

2018-07-03 Thread Virajith Jalaparti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532088#comment-16532088
 ] 

Virajith Jalaparti commented on HDFS-13310:
---

Thanks for posting this [~ehiggs]. I made the following modifications in the 
patch and posted [^HDFS-13310-HDFS-12090.003.patch]:
- Formatted newly added code to fit within the 80-character limit.
- Reverted unnecessary changes to Datanode.java.
- Added javadoc for BulkSyncTaskExecutionFeedback in 
DatanodeProtocol#sendHeartbeat.
- I didn't see a reason to use {{Pair}} in the constructor of 
SyncTaskExecutionResult, so I removed it.

A couple of comments:
- Can we add javadoc to all the new messages introduced in 
{{DatanodeProtocol.proto}} and to all newly added classes (*SyncTask*)?
- Any particular reason for the static imports in PBHelper.java? If not, I 
would prefer not to declare these as static imports.


> [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to back up 
> blocks
> --
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch, HDFS-13310-HDFS-12090.003.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instruct them to back up a block.
> This should take the form of two sub-commands: PUT_FILE (when the file is 
> <= 1 block in size) and MULTIPART_PUT_PART (when it is part of a Multipart 
> Upload; see HDFS-13186).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to back up blocks

2018-07-03 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13310:
--
Attachment: HDFS-13310-HDFS-12090.003.patch

> [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to back up 
> blocks
> --
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch, HDFS-13310-HDFS-12090.003.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instruct them to back up a block.
> This should take the form of two sub-commands: PUT_FILE (when the file is 
> <= 1 block in size) and MULTIPART_PUT_PART (when it is part of a Multipart 
> Upload; see HDFS-13186).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to back up blocks

2018-07-03 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13310:
--
Status: Open  (was: Patch Available)

> [PROVIDED Phase 2] The DatanodeProtocol should have DNA_BACKUP to back up 
> blocks
> --
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch, HDFS-13310-HDFS-12090.003.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instruct them to back up a block.
> This should take the form of two sub-commands: PUT_FILE (when the file is 
> <= 1 block in size) and MULTIPART_PUT_PART (when it is part of a Multipart 
> Upload; see HDFS-13186).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-07-03 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532086#comment-16532086
 ] 

Ajay Kumar commented on HDDS-175:
-

[~nandakumar131], [~xyao], [~shashikant] thanks for the reviews. [~anu] thanks 
for the review and commit.

> Refactor ContainerInfo to remove Pipeline object from it 
> -
>
> Key: HDDS-175
> URL: https://issues.apache.org/jira/browse/HDDS-175
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-175.00.patch, HDDS-175.01.patch, HDDS-175.02.patch, 
> HDDS-175.03.patch, HDDS-175.04.patch, HDDS-175.05.patch, HDDS-175.06.patch, 
> HDDS-175.07.patch, HDDS-175.08.patch, HDDS-175.09.patch, HDDS-175.10.patch, 
> HDDS-175.11.patch
>
>
> Refactor ContainerInfo to remove the Pipeline object from it. We can add the 
> below 4 fields to ContainerInfo to recreate the pipeline if required:
> # pipelineId
> # replication type
> # expected replication count
> # DataNodes where its replicas exist



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13186) [PROVIDED Phase 2] Multipart Uploader API

2018-07-03 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532066#comment-16532066
 ] 

Aaron Fabbri commented on HDFS-13186:
-

Hey folks. Catching up on stuff after vacation. Took a quick look at this. A 
couple of comments:

 # Thanks for keeping the API backend-agnostic (good layering).
 # What is the motivation for this? Even if it is not part of FileSystem, it is 
more surface area we need to deal with.

Looks like you have the basic idea of how to update S3Guard.  Think of the 
S3Guard MetadataStore as a "trailing log of metadata changes made to the 
underlying bucket". 

> [PROVIDED Phase 2] Multipart Uploader API
> -
>
> Key: HDFS-13186
> URL: https://issues.apache.org/jira/browse/HDFS-13186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13186.001.patch, HDFS-13186.002.patch, 
> HDFS-13186.003.patch, HDFS-13186.004.patch, HDFS-13186.005.patch, 
> HDFS-13186.006.patch, HDFS-13186.007.patch, HDFS-13186.008.patch, 
> HDFS-13186.009.patch, HDFS-13186.010.patch
>
>
> To write files in parallel to an external storage system as in HDFS-12090, 
> there are two approaches:
>  # Naive approach: use a single datanode per file that copies blocks locally 
> as it streams data to the external service. This requires a copy for each 
> block inside the HDFS system and then a copy for the block to be sent to the 
> external system.
> # Better approach: a single point (e.g. Namenode or SPS-style external 
> client) and Datanodes coordinate in a multipart, multinode upload.
> This system needs to work with multiple back ends and needs to coordinate 
> across the network. So we propose an API that resembles the following:
> {code:java}
> public UploadHandle multipartInit(Path filePath) throws IOException;
> public PartHandle multipartPutPart(InputStream inputStream,
> int partNumber, UploadHandle uploadId) throws IOException;
> public void multipartComplete(Path filePath,
> List<Pair<Integer, PartHandle>> handles,
> UploadHandle multipartUploadId) throws IOException;{code}
> Here, UploadHandle and PartHandle are opaque handles in the vein of 
> PathHandle, so they can be serialized and deserialized in the hadoop-hdfs 
> project without knowledge of how to deserialize e.g. S3A's version of an 
> UploadHandle and PartHandle.
> In an object store such as S3A, the implementation is straightforward. In 
> the case of writing multipart/multinode to HDFS, we can write each block as a 
> file part. The complete call will perform a concat on the blocks.
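> 
> A hypothetical caller of this API could look like the fragment below: init 
> once, put each part in order, then complete with the collected (partNumber, 
> PartHandle) pairs. The names uploader, dest, and partStreams are assumed to 
> be in scope, and Pair stands in for e.g. org.apache.commons.lang3.tuple.Pair.
> {code:java}
> UploadHandle upload = uploader.multipartInit(dest);
> List<Pair<Integer, PartHandle>> parts = new ArrayList<>();
> int partNumber = 1;
> for (InputStream partStream : partStreams) {
>   // each part is uploaded independently and identified by its number
>   PartHandle handle = uploader.multipartPutPart(partStream, partNumber, upload);
>   parts.add(Pair.of(partNumber, handle));
>   partNumber++;
> }
> uploader.multipartComplete(dest, parts, upload);
> {code}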



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-216) TestStorageContainerManagerHttpServer uses hard-coded port

2018-07-03 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-216:
--

 Summary: TestStorageContainerManagerHttpServer uses hard-coded port
 Key: HDDS-216
 URL: https://issues.apache.org/jira/browse/HDDS-216
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Arpit Agarwal
 Fix For: 0.2.1


TestStorageContainerManagerHttpServer fails if port 9876 is in use.

{code}
[INFO] Running org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 s 
<<< FAILURE! - in 
org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
[ERROR] 
testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
  Time elapsed: 0.401 s  <<< ERROR!
java.net.BindException: Port in use: 0.0.0.0:9876
{code}
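
The usual remedy is to let the test bind to an ephemeral port instead of the 
hard-coded default, e.g. by configuring the HTTP address with port 0 and then 
reading the bound port back from the server. A minimal sketch; the config key 
below is illustrative, not necessarily the real one:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: port 0 asks the OS for any free port, so the test no
// longer fails when 9876 is already in use.
Configuration conf = new Configuration();
conf.set("ozone.scm.http-address", "0.0.0.0:0");
// After starting StorageContainerManagerHttpServer with conf, read the actual
// bound address back from the server object instead of assuming 9876.
{code}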



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13717) Webhdfs api failed while uploading a tar ball

2018-07-03 Thread Yesha Vora (JIRA)
Yesha Vora created HDFS-13717:
-

 Summary: Webhdfs api failed while uploading a tar ball
 Key: HDFS-13717
 URL: https://issues.apache.org/jira/browse/HDFS-13717
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: Yesha Vora


Webhdfs call failed while uploading a tarball to HDFS.

{code}
in _run_command
raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: 
Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary 
@/xxx/hbase.tar.gz -H 'Content-Type: application/octet-stream' --negotiate -u : 
-k 
'https://xxx:20470/webhdfs/v1/hdp/apps/xxx/hbase/hbase.tar.gz?op=CREATE&overwrite=True&permission=444''
 returned status_code=403. 
{
  "RemoteException": {
"exception": "IOException", 
"javaClassName": "java.io.IOException", 
"message": "BP-1336964058-xxx.xx.xx.xxx-1530584337961:blk_1073742195_1371 
does not exist.\n\tat 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:5209)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.bumpBlockGenerationStamp(FSNamesystem.java:5281)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:947)\n\tat
 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:1096)\n\tat
 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat
 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)\n\tat
 org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)\n\tat 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)\n\tat 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)\n\tat 
java.security.AccessController.doPrivileged(Native Method)\n\tat 
javax.security.auth.Subject.doAs(Subject.java:422)\n\tat 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)\n\tat
 org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)\n"
  }
}
 {code}

The tarball is uploaded to HDFS successfully. However, the webhdfs API returns 
HTTP 403.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-07-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16531982#comment-16531982
 ] 

Hudson commented on HDDS-175:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14523 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14523/])
HDDS-175. Refactor ContainerInfo to remove Pipeline object from it. (aengineer: 
rev 7ca4f0cefa220c752920822c8d16469ab3b09b37)
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/InfoContainerHandler.java
* (edit) 
hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/DeleteContainerHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/ratis/RatisManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientMetrics.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/StorageContainerLocationProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestDeletedBlockLog.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupInputStream.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSmallFile.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestContainerOperations.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/standalone/StandaloneManagerImpl.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/CloseContainerHandler.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestContainerReportWithKeys.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/closer/TestContainerCloser.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/SQLCLI.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClient.java
* (edit) hadoop-hdds/common/src/main/proto/hdds.proto
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/protocolPB/OzonePBHelper.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerMapping.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestAllocateContainer.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerWithPipeline.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestCloseContainerEventHandler.java
* (edit) 

[jira] [Commented] (HDFS-13614) DN failed to connect with NN because of NPE in SocketIOWithTimeout

2018-07-03 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531968#comment-16531968
 ] 

Wei-Chiu Chuang commented on HDFS-13614:


Here's the initial NPE I saw on the Impala side:

{noformat}
I0517 21:14:24.498630 1781723 jni-util.cc:176] 
org.apache.impala.common.ImpalaRuntimeException: UDF::evaluate() ran into a 
problem.
at 
org.apache.impala.hive.executor.UdfExecutor.evaluate(UdfExecutor.java:291)
Caused by: org.apache.impala.common.ImpalaRuntimeException: UDF failed to 
evaluate
at 
org.apache.impala.hive.executor.UdfExecutor.evaluate(UdfExecutor.java:361)
at 
org.apache.impala.hive.executor.UdfExecutor.evaluate(UdfExecutor.java:288)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.impala.hive.executor.UdfExecutor.evaluate(UdfExecutor.java:353)
... 1 more
Caused by: java.lang.NullPointerException
at java.util.LinkedList$ListItr.next(LinkedList.java:893)
at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.trimIdleSelectors(SocketIOWithTimeout.java:447)
at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.release(SocketIOWithTimeout.java:429)
at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:373)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:209)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
at 
org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:207)
at 
org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:156)
at 
org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:788)
at 
org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:844)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:904)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:954)
...
{noformat}

> DN failed to connect with NN because of NPE in SocketIOWithTimeout
> --
>
> Key: HDFS-13614
> URL: https://issues.apache.org/jira/browse/HDFS-13614
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Priority: Major
>
> {{LinkedList$ListItr.next()}} is throwing an NPE in {{SocketIOWithTimeout}}. 
> Because of this, socket connections are failing. It may also be a Java bug:
> [https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8133715]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531963#comment-16531963
 ] 

genericqa commented on HDDS-167:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDDS-167 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-167 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930188/HDDS-167.09.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/427/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, 
> HDDS-167.04.patch, HDDS-167.05.patch, HDDS-167.06.patch, HDDS-167.07.patch, 
> HDDS-167.08.patch, HDDS-167.09.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.:
> - command line
> - documentation
> - unit tests
> - acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-07-03 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-167:
---
Attachment: HDDS-167.09.patch

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, 
> HDDS-167.04.patch, HDDS-167.05.patch, HDDS-167.06.patch, HDDS-167.07.patch, 
> HDDS-167.08.patch, HDDS-167.09.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.:
> - command line
> - documentation
> - unit tests
> - acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-07-03 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531936#comment-16531936
 ] 

Arpit Agarwal commented on HDDS-167:


Rebased again. Also starting a full unit test run locally.

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, 
> HDDS-167.04.patch, HDDS-167.05.patch, HDDS-167.06.patch, HDDS-167.07.patch, 
> HDDS-167.08.patch, HDDS-167.09.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.:
> - command line
> - documentation
> - unit tests
> - acceptance tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-07-03 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531932#comment-16531932
 ] 

Anu Engineer edited comment on HDDS-175 at 7/3/18 9:14 PM:
---

[~shashikant], [~nandakumar131], [~xyao] Thanks for the reviews. [~ajayydv] 
Thanks for the contribution. I have committed this patch to trunk.

I have also made sure all acceptance tests pass with this patch.


was (Author: anu):
[~shashikant], [~nandakumar131], [~xyao] Thanks for the reviews. [~ajayydv] 
Thanks for the contribution. I have committed this patch to trunk.

> Refactor ContainerInfo to remove Pipeline object from it 
> -
>
> Key: HDDS-175
> URL: https://issues.apache.org/jira/browse/HDDS-175
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-175.00.patch, HDDS-175.01.patch, HDDS-175.02.patch, 
> HDDS-175.03.patch, HDDS-175.04.patch, HDDS-175.05.patch, HDDS-175.06.patch, 
> HDDS-175.07.patch, HDDS-175.08.patch, HDDS-175.09.patch, HDDS-175.10.patch, 
> HDDS-175.11.patch
>
>
> Refactor ContainerInfo to remove the Pipeline object from it. We can add the 
> four fields below to ContainerInfo to recreate the pipeline if required:
> # pipelineId
> # replication type
> # expected replication count
> # DataNodes where its replicas exist (a sketch follows below)
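
For illustration, a minimal sketch of a ContainerInfo shape carrying those four 
fields. All class, field, and enum names here are assumptions made for the 
sketch, not the shape of the committed HDDS-175 API:

{code:java}
// Hedged sketch only: names and types are illustrative assumptions. The
// point is that ContainerInfo keeps just enough state to recreate a
// Pipeline on demand instead of embedding a Pipeline object.
import java.util.List;

public final class ContainerInfoSketch {
  enum ReplicationType { RATIS, STAND_ALONE }    // hypothetical enum

  private final String pipelineID;               // 1. pipelineId
  private final ReplicationType replicationType; // 2. replication type
  private final int replicationFactor;           // 3. expected replica count
  private final List<String> dataNodes;          // 4. datanodes holding replicas

  ContainerInfoSketch(String pipelineID, ReplicationType type,
      int factor, List<String> dataNodes) {
    this.pipelineID = pipelineID;
    this.replicationType = type;
    this.replicationFactor = factor;
    this.dataNodes = dataNodes;
  }

  // A pipeline manager could rebuild the Pipeline from these fields when a
  // caller actually needs one (cf. the ContainerWithPipeline helper added
  // in the committed file list above).
}
{code}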



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-07-03 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-175:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~shashikant], [~nandakumar131], [~xyao] Thanks for the reviews. [~ajayydv] 
Thanks for the contribution. I have committed this patch to trunk.

> Refactor ContainerInfo to remove Pipeline object from it 
> -
>
> Key: HDDS-175
> URL: https://issues.apache.org/jira/browse/HDDS-175
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-175.00.patch, HDDS-175.01.patch, HDDS-175.02.patch, 
> HDDS-175.03.patch, HDDS-175.04.patch, HDDS-175.05.patch, HDDS-175.06.patch, 
> HDDS-175.07.patch, HDDS-175.08.patch, HDDS-175.09.patch, HDDS-175.10.patch, 
> HDDS-175.11.patch
>
>
> Refactor ContainerInfo to remove the Pipeline object from it. We can add the 
> four fields below to ContainerInfo to recreate the pipeline if required:
> # pipelineId
> # replication type
> # expected replication count
> # DataNodes where its replicas exist



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-03 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-213:

Description: When updating the container metadata, the in-memory state and 
on-disk state should be updated under the same lock.
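
As a rough illustration of that invariant, a sketch where one write lock covers 
both the on-disk write and the in-memory update; the class and field names are 
assumptions for the sketch, not the KeyValueContainer code:

{code:java}
// Hedged sketch, not the actual implementation: a single write lock guards
// both states, and the in-memory map is only updated after the on-disk
// write succeeds, so readers never observe unpersisted metadata.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ContainerMetadataSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final Map<String, String> metadata = new HashMap<>(); // in-memory
  private final Path containerFile;                             // on-disk

  public ContainerMetadataSketch(Path containerFile) {
    this.containerFile = containerFile;
  }

  public void update(Map<String, String> newEntries) throws IOException {
    lock.writeLock().lock();        // one lock for both updates
    try {
      Map<String, String> merged = new HashMap<>(metadata);
      merged.putAll(newEntries);
      Files.write(containerFile, merged.toString().getBytes());
      metadata.putAll(newEntries);  // in-memory change only after disk succeeds
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}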

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch
>
>
> When updating the container metadata, the in-memory state and on-disk state 
> should be updated under the same lock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-215) Handle Container Already Exists exception on client side

2018-07-03 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-215:
---

 Summary: Handle Container Already Exists exception on client side
 Key: HDDS-215
 URL: https://issues.apache.org/jira/browse/HDDS-215
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru


When creating containers on a DN, if we get a CONTAINER_ALREADY_EXISTS 
exception, it should be handled on the client side.
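
A hedged sketch of what that client-side handling could look like; the result 
enum and method names are invented for illustration and are not the actual 
Ozone client API:

{code:java}
// Illustrative only: treat CONTAINER_ALREADY_EXISTS as an idempotent
// success on the client, since the container the caller wants is there.
public class CreateContainerClientSketch {
  enum Result { SUCCESS, CONTAINER_ALREADY_EXISTS, IO_ERROR }

  // Stand-in for the real RPC to the datanode.
  static Result sendCreateContainer(long containerID) {
    return Result.CONTAINER_ALREADY_EXISTS;  // simulate a duplicate create
  }

  static void createContainer(long containerID) {
    Result r = sendCreateContainer(containerID);
    switch (r) {
      case SUCCESS:
      case CONTAINER_ALREADY_EXISTS:
        // Another writer, or a retry of this request, already created the
        // container; it is safe to proceed and use it.
        return;
      default:
        throw new IllegalStateException(
            "createContainer failed for container " + containerID);
    }
  }

  public static void main(String[] args) {
    createContainer(42L);  // completes normally despite the duplicate
  }
}
{code}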



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-03 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-213:

Attachment: HDDS-213-HDDS-48.000.patch

> Single lock to synchronize KeyValueContainer#update
> ---
>
> Key: HDDS-213
> URL: https://issues.apache.org/jira/browse/HDDS-213
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-213-HDDS-48.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531874#comment-16531874
 ] 

genericqa commented on HDDS-175:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 30m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
20s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} tools in the patch passed. 

[jira] [Commented] (HDFS-13614) DN failed to connect with NN because of NPE in SocketIOWithTimeout

2018-07-03 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531873#comment-16531873
 ] 

Wei-Chiu Chuang commented on HDFS-13614:


The impalad had dozens of threads blocked on the SelectorPool monitor:

{noformat}
"Thread-588336" #591562 prio=5 os_prio=0 tid=0x7f904b9ad000 nid=0x1b2fb6 
waiting for monitor entry [0x7f8fb1f6b000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.release(SocketIOWithTimeout.java:428)
- waiting to lock <0x80697830> (a 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool)
at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:373)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:209)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
at 
org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:207)
at 
org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:156)
- locked <0x000589375ce0> (a 
org.apache.hadoop.hdfs.RemoteBlockReader2)
at 
org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:788)
at 
org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:844)
- locked <0x00058931c560> (a org.apache.hadoop.hdfs.DFSInputStream)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:904)
- locked <0x00058931c560> (a org.apache.hadoop.hdfs.DFSInputStream)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:954)
- locked <0x00058931c560> (a org.apache.hadoop.hdfs.DFSInputStream)
at java.io.DataInputStream.read(DataInputStream.java:149)

{noformat}

It seems that after the NPE was thrown, the object's monitor somehow did not 
get released.
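
To make the suspected failure mode concrete, a small sketch; this is an 
assumption about the mechanism, not a confirmed reproduction of this bug. 
Iterating a {{LinkedList}} while another thread mutates it outside a common 
lock usually fails fast with {{ConcurrentModificationException}}, but under a 
genuine data race it can also surface as an NPE inside 
{{LinkedList$ListItr.next()}}, the frame seen in the stack traces above:

{code:java}
// Hedged demo of unsynchronized LinkedList access, loosely modeled on
// SelectorPool's idle-selector list. This is NOT SocketIOWithTimeout code;
// it only shows why one path mutating the list outside the lock can break
// an iterator running on another thread.
import java.util.LinkedList;

public class SelectorListRace {
  private static final LinkedList<Object> idleList = new LinkedList<>();

  public static void main(String[] args) {
    Thread mutator = new Thread(() -> {
      while (true) {
        synchronized (idleList) { idleList.add(new Object()); }
        idleList.pollFirst();  // BUG: removal not under the shared lock
      }
    });
    mutator.setDaemon(true);
    mutator.start();

    while (true) {
      synchronized (idleList) {
        // This lock does not help, because the removal above bypasses it.
        for (Object o : idleList) {
          if (o == null) { System.out.println("saw null element"); }
        }
      }
    }
  }
}
{code}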

> DN failed to connect with NN because of NPE in SocketIOWithTimeout
> --
>
> Key: HDFS-13614
> URL: https://issues.apache.org/jira/browse/HDFS-13614
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Priority: Major
>
> {{LinkedList$ListItr.next()}} is throwing an NPE in {{SocketIOWithTimeout}}. 
> Because of this, socket connections are failing. It may also be a Java bug:
> [https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8133715]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-182) CleanUp Reimplemented classes

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531867#comment-16531867
 ] 

genericqa commented on HDDS-182:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
10s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
43s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
51s{color} | {color:red} hadoop-hdds/container-service in HDDS-48 has 4 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-hdds/container-service generated 1 new + 2 
unchanged - 2 fixed = 3 total (was 4) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
51s{color} | {color:red} hadoop-ozone/tools generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-hdds_container-service generated 0 new + 2 
unchanged - 3 fixed = 2 total (was 5) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} integration-test in the patch passed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  4s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 11s{color} 
| {color:red} integration-test in the patch failed. 

[jira] [Commented] (HDDS-198) Create AuditLogger mechanism to be used by OM, SCM and Datanode

2018-07-03 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531862#comment-16531862
 ] 

Dinesh Chitlangia commented on HDDS-198:


[~anu], [~nandakumar131], [~ajayydv] - Thank you for your guidance and support!

> Create AuditLogger mechanism to be used by OM, SCM and Datanode
> ---
>
> Key: HDDS-198
> URL: https://issues.apache.org/jira/browse/HDDS-198
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: audit, log4j2
> Fix For: 0.2.1
>
> Attachments: HDDS-198.001.patch, HDDS-198.002.patch, 
> HDDS-198.003.patch, HDDS-198.004.patch, HDDS-198.005.patch
>
>
> This Jira tracks the work to create a custom AuditLogger which can be used by 
> OM, SCM, and Datanode for auditing read/write events.
> The AuditLogger will be designed using log4j2, leveraging the MarkerFilter 
> approach so that auditing of read/write events can be turned on or off simply 
> by changing the log config.
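
As a rough illustration of the marker approach, here is a sketch against the 
public log4j2 API; the logger name, marker names, and audit message format are 
assumptions for the sketch, not the actual HDDS-198 classes:

{code:java}
// Hedged sketch of marker-based audit logging with log4j2. A MarkerFilter
// in log4j2.properties can then deny e.g. READ-marked events with no code
// change, along the lines of:
//   filter.read.type = MarkerFilter
//   filter.read.marker = READ
//   filter.read.onMatch = DENY
//   filter.read.onMismatch = NEUTRAL
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.Marker;
import org.apache.logging.log4j.MarkerManager;

public class AuditLogSketch {
  private static final Logger AUDIT = LogManager.getLogger("OMAudit");
  private static final Marker READ = MarkerManager.getMarker("READ");
  private static final Marker WRITE = MarkerManager.getMarker("WRITE");

  public static void main(String[] args) {
    AUDIT.info(WRITE, "user=alice | op=CREATE_VOLUME | status=SUCCESS");
    AUDIT.info(READ, "user=alice | op=READ_KEY | status=SUCCESS");
  }
}
{code}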



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-212) Introduce NodeStateManager to manage the state of Datanodes in SCM

2018-07-03 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531860#comment-16531860
 ] 

Anu Engineer commented on HDDS-212:
---

+1. We should probably update the comments and Javadoc, but feel free to 
commit once that is done.

> Introduce NodeStateManager to manage the state of Datanodes in SCM
> --
>
> Key: HDDS-212
> URL: https://issues.apache.org/jira/browse/HDDS-212
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-212.000.patch
>
>
> Introducing {{NodeStateManager}} will make the lifecycle management of 
> datanodes in SCM easy. NodeStateManager will be responsible for marking 
> datanodes as stale or dead when heartbeats are not received, and it will 
> maintain the current state of all the datanodes in the cluster. 
> NodeStateManager should be the only place where we maintain node state 
> information; everyone else should query NodeStateManager for it.
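
For illustration, a minimal sketch of heartbeat-driven state classification; 
the class name, state names, and stale/dead thresholds are assumptions, not 
the HDDS-212 implementation:

{code:java}
// Hedged sketch: a node moves HEALTHY -> STALE -> DEAD purely as a
// function of how long ago its last heartbeat arrived.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NodeStateSketch {
  enum NodeState { HEALTHY, STALE, DEAD }

  private static final long STALE_MS = 90_000;   // assumed threshold
  private static final long DEAD_MS = 600_000;   // assumed threshold
  private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();

  void onHeartbeat(String datanodeId) {
    lastHeartbeat.put(datanodeId, System.currentTimeMillis());
  }

  NodeState getState(String datanodeId) {
    Long last = lastHeartbeat.get(datanodeId);
    if (last == null) {
      return NodeState.DEAD;                     // never reported
    }
    long age = System.currentTimeMillis() - last;
    if (age > DEAD_MS) return NodeState.DEAD;
    if (age > STALE_MS) return NodeState.STALE;
    return NodeState.HEALTHY;
  }
}
{code}

Keeping this logic in one class gives the rest of SCM a single source of truth 
for node state, which is the point of the description above.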



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-198) Create AuditLogger mechanism to be used by OM, SCM and Datanode

2018-07-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531814#comment-16531814
 ] 

Hudson commented on HDDS-198:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14521 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14521/])
HDDS-198. Create AuditLogger mechanism to be used by OM, SCM and (aengineer: 
rev c0ef7e7680d882e2182f48f033109678a48742ab)
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/package-info.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/TestOzoneAuditLogger.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/AuditAction.java
* (edit) hadoop-hdds/common/pom.xml
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/DummyAction.java
* (add) hadoop-hdds/common/src/test/resources/log4j2.properties
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/DummyEntity.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/AuditMarker.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/package-info.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/AuditEventStatus.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/Auditable.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/package-info.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/AuditLogger.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/AuditLoggerType.java


> Create AuditLogger mechanism to be used by OM, SCM and Datanode
> ---
>
> Key: HDDS-198
> URL: https://issues.apache.org/jira/browse/HDDS-198
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: audit, log4j2
> Fix For: 0.2.1
>
> Attachments: HDDS-198.001.patch, HDDS-198.002.patch, 
> HDDS-198.003.patch, HDDS-198.004.patch, HDDS-198.005.patch
>
>
> This Jira tracks the work to create a custom AuditLogger which can be used by 
> OM, SCM, and Datanode for auditing read/write events.
> The AuditLogger will be designed using log4j2, leveraging the MarkerFilter 
> approach so that auditing of read/write events can be turned on or off simply 
> by changing the log config.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-198) Create AuditLogger mechanism to be used by OM, SCM and Datanode

2018-07-03 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-198:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1,  [~ajayydv], [~nandakumar131] Thanks for the reviews. [~dineshchitlangia] 
Thanks for the contribution. I have committed this to the trunk.


> Create AuditLogger mechanism to be used by OM, SCM and Datanode
> ---
>
> Key: HDDS-198
> URL: https://issues.apache.org/jira/browse/HDDS-198
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: audit, log4j2
> Fix For: 0.2.1
>
> Attachments: HDDS-198.001.patch, HDDS-198.002.patch, 
> HDDS-198.003.patch, HDDS-198.004.patch, HDDS-198.005.patch
>
>
> This Jira tracks the work to create a custom AuditLogger which can be used by 
> OM, SCM, and Datanode for auditing read/write events.
> The AuditLogger will be designed using log4j2, leveraging the MarkerFilter 
> approach so that auditing of read/write events can be turned on or off simply 
> by changing the log config.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-211) Add a create container Lock

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531787#comment-16531787
 ] 

genericqa commented on HDDS-211:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
 1s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-hdds/container-service in HDDS-48 has 4 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 54s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.keyvalue.TestKeyValueHandler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-211 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930158/HDDS-211-HDDS-48.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2bc738d92683 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / e1f4b3b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/426/artifact/out/branch-findbugs-hadoop-hdds_container-service-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/426/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
|  Test Results | 

[jira] [Updated] (HDFS-13381) [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path

2018-07-03 Thread Uma Maheswara Rao G (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-13381:
---------------------------------------
    Resolution: Fixed
  Hadoop Flags: Reviewed
 Fix Version/s: HDFS-10285
        Status: Resolved  (was: Patch Available)

> [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path
> 
>
> Key: HDFS-13381
> URL: https://issues.apache.org/jira/browse/HDFS-13381
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Major
> Fix For: HDFS-10285
>
> Attachments: HDFS-13381-HDFS-10285-00.patch, 
> HDFS-13381-HDFS-10285-01.patch, HDFS-13381-HDFS-10285-02.patch, 
> HDFS-13381-HDFS-10285-03.patch
>
>
> This Jira task will address the following comments:
>  # Use DFSUtilClient::makePathFromFileId instead of generics (one for a string 
> path and another for an inodeId) as today (see the sketch after this list).
>  # Only the context implementation differs for external/internal SPS. Here, it 
> can simply move FileCollector and BlockMoveTaskHandler to the Context 
> interface.
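
For readers unfamiliar with the helper, a usage sketch of the file-id-based 
path pattern the first point refers to, assuming 
{{DFSUtilClient#makePathFromFileId()}} builds a reserved-inode path of the form 
{{/.reserved/.inodes/<fileId>}}; the file id below is an example value, not 
taken from this issue:

{code:java}
// Hedged usage sketch: a reserved-inode path identifies the file by its
// inode id, so the satisfier path stays valid even if the file is renamed.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSUtilClient;

public class SatisfierPathSketch {
  public static void main(String[] args) {
    long fileId = 16417L;  // hypothetical inode id for illustration
    Path p = DFSUtilClient.makePathFromFileId(fileId);
    System.out.println(p); // expected form: /.reserved/.inodes/16417
  }
}
{code}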



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13674) Improve documentation on Metrics

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531756#comment-16531756
 ] 

genericqa commented on HDFS-13674:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
39m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13674 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930147/HDFS-13674.000.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 623bd633ae36 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 51654a3 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24549/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve documentation on Metrics
> 
>
> Key: HDFS-13674
> URL: https://issues.apache.org/jira/browse/HDFS-13674
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, metrics
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-13674.000.patch
>
>
> There are a few confusing places in the [Hadoop Metrics 
> page|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Metrics.html].
>  For instance, there are duplicated entries such as {{FsImageLoadTime}}; some 
> quantile metrics do not have corresponding entries; and the descriptions of 
> some quantile metrics are not very specific about what the {{num}} variable 
> in the metric name means.
> This JIRA targets improving this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11096) Support rolling upgrade between 2.x and 3.x

2018-07-03 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531753#comment-16531753
 ] 

Brahma Reddy Battula commented on HDFS-11096:
-

Hi [~mackrorysd],

Might YARN-6457 still cause the YARN issue (even after [~rkanter]'s latest 
update here)?

> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: HDFS-11096
> URL: https://issues.apache.org/jira/browse/HDFS-11096
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Sean Mackrory
>Priority: Blocker
> Attachments: HDFS-11096.001.patch, HDFS-11096.002.patch, 
> HDFS-11096.003.patch, HDFS-11096.004.patch, HDFS-11096.005.patch, 
> HDFS-11096.006.patch, HDFS-11096.007.patch
>
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't 
> rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to rolling upgrade 
> to 3.x releases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-214) HDDS/Ozone First Release

2018-07-03 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-214:
-

 Summary: HDDS/Ozone First Release
 Key: HDDS-214
 URL: https://issues.apache.org/jira/browse/HDDS-214
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Reporter: Anu Engineer
Assignee: Elek, Marton


This is an umbrella JIRA that collects all work items, design discussions, etc. 
for Ozone's release. We will post a design document soon to open the discussion 
and nail down the details of the release.

cc: [~xyao] , [~elek], [~arpitagarwal] [~jnp] , [~msingh] [~nandakumar131], 
[~bharatviswa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-212) Introduce NodeStateManager to manage the state of Datanodes in SCM

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531738#comment-16531738
 ] 

genericqa commented on HDDS-212:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 33m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-ozone/ozone-manager in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 28m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdds/server-scm generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
20s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} ozone-manager in the patch passed. {color} 

[jira] [Created] (HDDS-213) Single lock to synchronize KeyValueContainer#update

2018-07-03 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-213:
---

 Summary: Single lock to synchronize KeyValueContainer#update
 Key: HDDS-213
 URL: https://issues.apache.org/jira/browse/HDDS-213
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
 Fix For: 0.2.1






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-75) Ozone: Support CopyContainer

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531725#comment-16531725
 ] 

genericqa commented on HDDS-75:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 35m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 34m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 34m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 34m 36s{color} 
| {color:red} root generated 1 new + 1596 unchanged - 0 fixed = 1597 total (was 
1596) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-hdds/container-service generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m  6s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
40s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Should 

[jira] [Comment Edited] (HDDS-182) CleanUp Reimplemented classes

2018-07-03 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16530573#comment-16530573
 ] 

Hanisha Koneru edited comment on HDDS-182 at 7/3/18 5:33 PM:
-

* Fixed the following integration tests in this patch
 ## TestContainerPersistence
 ## TestContainerServer
 ## TestSCMCli - Ignoring this test for now. Will open a new Jira to fix this.
 * Changed containerId to containerID in ContainerData to be consistent with the 
naming convention (e.g. clusterID, scmID).
 * Removed the restriction against updating the existing container metadata fields.
 * Fixed TestKeyValueHandler, which was failing in the Jenkins run.


was (Author: hanishakoneru):
* Fixed the following integration tests in this patch
*# TestContainerPersistence
*# TestContainerServer
*# TestSCMCli - Ignoring this test for now. Will open a new Jira to fix this.
* Changed containerId to containerID in ContainerData to be consistent with 
naming convention (for eg. clusterID, scmID).
* Removed restriction of not updating the existing container metadata fields.

> CleanUp Reimplemented classes
> -
>
> Key: HDDS-182
> URL: https://issues.apache.org/jira/browse/HDDS-182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-182-HDDS-48.001.patch
>
>
> Cleanup container-service's ozone.container.common package. The following 
> classes have been refactored and re-implemented. The unused classes/methods 
> should be cleaned up.
>  # org.apache.hadoop.ozone.container.common.helpers.ChunkUtils
>  # org.apache.hadoop.ozone.container.common.helpers.KeyUtils
>  # org.apache.hadoop.ozone.container.common.helpers.ContainerData
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
>  # org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl
> Also, fix integration tests broken by deleting these classes.






[jira] [Updated] (HDDS-182) CleanUp Reimplemented classes

2018-07-03 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-182:

Status: Patch Available  (was: In Progress)

> CleanUp Reimplemented classes
> -
>
> Key: HDDS-182
> URL: https://issues.apache.org/jira/browse/HDDS-182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-182-HDDS-48.001.patch
>
>
> Cleanup container-service's ozone.container.common package. The following 
> classes have been refactored and re-implemented. The unused classes/methods 
> should be cleaned up.
>  # org.apache.hadoop.ozone.container.common.helpers.ChunkUtils
>  # org.apache.hadoop.ozone.container.common.helpers.KeyUtils
>  # org.apache.hadoop.ozone.container.common.helpers.ContainerData
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
>  # org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl
> Also, fix integration tests broken by deleting these classes.






[jira] [Updated] (HDDS-182) CleanUp Reimplemented classes

2018-07-03 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-182:

Attachment: HDDS-182-HDDS-48.001.patch

> CleanUp Reimplemented classes
> -
>
> Key: HDDS-182
> URL: https://issues.apache.org/jira/browse/HDDS-182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-182-HDDS-48.001.patch
>
>
> Cleanup container-service's ozone.container.common package. The following 
> classes have been refactored and re-implemented. The unused classes/methods 
> should be cleaned up.
>  # org.apache.hadoop.ozone.container.common.helpers.ChunkUtils
>  # org.apache.hadoop.ozone.container.common.helpers.KeyUtils
>  # org.apache.hadoop.ozone.container.common.helpers.ContainerData
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
>  # org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl
> Also, fix integration tests broken by deleting these classes.






[jira] [Updated] (HDDS-182) CleanUp Reimplemented classes

2018-07-03 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-182:

Attachment: (was: HDDS-182-HDDS-48.000.patch)

> CleanUp Reimplemented classes
> -
>
> Key: HDDS-182
> URL: https://issues.apache.org/jira/browse/HDDS-182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>
> Cleanup container-service's ozone.container.common package. The following 
> classes have been refactored and re-implemented. The unused classes/methods 
> should be cleaned up.
>  # org.apache.hadoop.ozone.container.common.helpers.ChunkUtils
>  # org.apache.hadoop.ozone.container.common.helpers.KeyUtils
>  # org.apache.hadoop.ozone.container.common.helpers.ContainerData
>  # org.apache.hadoop.ozone.container.common.impl.Dispatcher
>  # org.apache.hadoop.ozone.container.common.impl.ChunkManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl
>  # org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl
> Also, fix integration tests broken by deleting these classes.






[jira] [Commented] (HDFS-13524) Occasional "All datanodes are bad" error in TestLargeBlock#testLargeBlockSize

2018-07-03 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531716#comment-16531716
 ] 

Siyao Meng commented on HDFS-13524:
---

[~genericqa] Unrelated flaky tests. Passed locally.

> Occasional "All datanodes are bad" error in TestLargeBlock#testLargeBlockSize
> -
>
> Key: HDFS-13524
> URL: https://issues.apache.org/jira/browse/HDFS-13524
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13524.001.patch, HDFS-13524.002.patch
>
>
> TestLargeBlock#testLargeBlockSize may fail with error:
> {quote}
> All datanodes 
> [DatanodeInfoWithStorage[127.0.0.1:44968,DS-acddd79e-cdf1-4ac5-aac5-e804a2e61600,DISK]]
>  are bad. Aborting...
> {quote}
> Tracing back, the error is due to the stress applied to the host while 
> sending a 2 GB block, which causes the write pipeline ACK read to time out:
> {quote}
> 2017-09-10 22:16:07,285 [DataXceiver for client 
> DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] INFO  
> datanode.DataNode (DataXceiver.java:writeBlock(742)) - Receiving 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001 src: 
> /127.0.0.1:57794 dest: /127.0.0.1:44968
> 2017-09-10 22:16:50,402 [DataXceiver for client 
> DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] WARN  
> datanode.DataNode (BlockReceiver.java:flushOrSync(434)) - Slow flushOrSync 
> took 5383ms (threshold=300ms), isSync:false, flushTotalNanos=5383638982ns, 
> volume=file:/tmp/tmp.1oS3ZfDCwq/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/
> 2017-09-10 22:17:54,427 [ResponseProcessor for block 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001] WARN  
> hdfs.DataStreamer (DataStreamer.java:run(1214)) - Exception for 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001
> java.net.SocketTimeoutException: 65000 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/127.0.0.1:57794 remote=/127.0.0.1:44968]
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
>   at java.io.FilterInputStream.read(FilterInputStream.java:83)
>   at java.io.FilterInputStream.read(FilterInputStream.java:83)
>   at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:434)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
>   at 
> org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1104)
> 2017-09-10 22:17:54,432 [DataXceiver for client 
> DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] INFO  
> datanode.DataNode (BlockReceiver.java:receiveBlock(1000)) - Exception for 
> BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001
> java.io.IOException: Connection reset by peer
> {quote}
> Instead of raising the read timeout, I suggest increasing the cluster size 
> from the default of 1 to 3, so that the client has the opportunity to choose 
> a different DN and retry.
> I suspect this started failing after HDFS-13103, in Hadoop 2.8/3.0.0-alpha1, 
> when we introduced the client acknowledgement read timeout.
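
For illustration, a minimal sketch of the suggested test change (assuming the test builds its cluster with MiniDFSCluster; variable names are illustrative, not the actual patch):

{code:java}
// Start the mini cluster with 3 datanodes instead of the default 1, so the
// DFSClient can pick a different DN and retry when the pipeline ack times out.
Configuration conf = new HdfsConfiguration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(3)   // previously 1
    .build();
try {
  cluster.waitActive();
  // ... existing testLargeBlockSize body: write and verify the ~2 GB block ...
} finally {
  cluster.shutdown();
}
{code}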






[jira] [Updated] (HDDS-211) Add a create container Lock

2018-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-211:

Attachment: HDDS-211-HDDS-48.01.patch

> Add a create container Lock
> ---
>
> Key: HDDS-211
> URL: https://issues.apache.org/jira/browse/HDDS-211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-211-HDDS-48.00.patch, HDDS-211-HDDS-48.01.patch
>
>
> Add a lock to guard multiple creations of the same container.
> When multiple clients try to create a container with the same containerID, 
> one client should succeed and the remaining clients should receive a 
> StorageContainerException. 
>  
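
For illustration, a minimal sketch of how such a guard could look (the holder class, map, Container type, and exception constructor are assumptions, not the actual patch):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical holder class; Container and StorageContainerException stand
// in for the real Ozone types.
class ContainerSet {
  private final ReentrantLock createContainerLock = new ReentrantLock();
  private final Map<Long, Container> containerMap = new ConcurrentHashMap<>();

  void createContainer(long containerID, Container container)
      throws StorageContainerException {
    createContainerLock.lock();
    try {
      if (containerMap.containsKey(containerID)) {
        // A racing client already created this container; only the first wins.
        throw new StorageContainerException(
            "Container already exists: " + containerID);
      }
      containerMap.put(containerID, container);
    } finally {
      createContainerLock.unlock();
    }
  }
}
{code}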






[jira] [Commented] (HDDS-211) Add a create container Lock

2018-07-03 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531707#comment-16531707
 ] 

Bharat Viswanadham commented on HDDS-211:
-

Rebased the patch to apply to HDDS-48.

> Add a create container Lock
> ---
>
> Key: HDDS-211
> URL: https://issues.apache.org/jira/browse/HDDS-211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-211-HDDS-48.00.patch, HDDS-211-HDDS-48.01.patch
>
>
> Add a lock to guard multiple creations of the same container.
> When multiple clients try to create a container with the same containerID, 
> one client should succeed and the remaining clients should receive a 
> StorageContainerException. 
>  






[jira] [Updated] (HDDS-211) Add a create container Lock

2018-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-211:

Status: Patch Available  (was: Open)

> Add a create container Lock
> ---
>
> Key: HDDS-211
> URL: https://issues.apache.org/jira/browse/HDDS-211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-211-HDDS-48.00.patch, HDDS-211-HDDS-48.01.patch
>
>
> Add a lock to guard multiple creations of the same container.
> When multiple clients try to create a container with the same containerID, 
> one client should succeed and the remaining clients should receive a 
> StorageContainerException. 
>  






[jira] [Updated] (HDDS-211) Add a create container Lock

2018-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-211:

Status: Open  (was: Patch Available)

> Add a create container Lock
> ---
>
> Key: HDDS-211
> URL: https://issues.apache.org/jira/browse/HDDS-211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-211-HDDS-48.00.patch
>
>
> Add a lock to guard multiple creations of the same container.
> When multiple clients try to create a container with the same containerID, 
> one client should succeed and the remaining clients should receive a 
> StorageContainerException. 
>  






[jira] [Updated] (HDDS-176) Add keyCount and container maximum size to ContainerData

2018-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-176:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add keyCount and container maximum size to ContainerData
> 
>
> Key: HDDS-176
> URL: https://issues.apache.org/jira/browse/HDDS-176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-176-HDDS-48.00.patch, HDDS-176-HDDS-48.01.patch
>
>
> # ContainerData should hold the container's maximum size, and this should be 
> serialized into the .container file. This is needed because the configured 
> container size can change over time, so old containers may have a different 
> maximum size than newly created containers.
>  # Also add a KeyCount field that records the number of keys in the container.
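
For illustration, a minimal sketch of the two additions (field and accessor names are assumptions, not the committed patch):

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical fragment of ContainerData.
class ContainerData {
  // Maximum size this container was created with, persisted in the
  // .container file so old containers keep their original limit even if the
  // cluster-wide default changes later.
  private long maxSizeGB;

  // Number of keys currently stored in the container.
  private final AtomicLong keyCount = new AtomicLong(0);

  long getMaxSizeGB() { return maxSizeGB; }
  void setMaxSizeGB(long size) { this.maxSizeGB = size; }

  long getKeyCount() { return keyCount.get(); }
  void incrKeyCount() { keyCount.incrementAndGet(); }
  void decrKeyCount() { keyCount.decrementAndGet(); }
}
{code}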






[jira] [Updated] (HDDS-205) Add metrics to HddsDispatcher

2018-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-205:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add metrics to HddsDispatcher
> -
>
> Key: HDDS-205
> URL: https://issues.apache.org/jira/browse/HDDS-205
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-205-HDDS-48.00.patch, HDDS-205-HDDS-48.01.patch, 
> HDDS-205-HDDS-48.02.patch, HDDS-205-HDDS-48.03.patch
>
>
> This patch adds metrics to the newly added HddsDispatcher.
> It uses the already existing ContainerMetrics.
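
For illustration, a rough sketch of where such metrics could hook into the dispatch path (a fragment; the metrics and handler fields and the ContainerMetrics method names are assumptions, not the actual patch):

{code:java}
// Count each container op by type and record its latency via the existing
// ContainerMetrics.
public ContainerCommandResponseProto dispatch(
    ContainerCommandRequestProto request) {
  long startNanos = System.nanoTime();
  metrics.incContainerOpsMetrics(request.getCmdType());
  try {
    return handler.handle(request);
  } finally {
    metrics.incContainerOpsLatencies(request.getCmdType(),
        System.nanoTime() - startNanos);
  }
}
{code}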






[jira] [Commented] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-07-03 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531679#comment-16531679
 ] 

Ajay Kumar commented on HDDS-175:
-

Patch v11 addresses the failure in TestContainerStateManager. 
TestStorageContainerManager passed locally (it timed out in the Yetus run).

> Refactor ContainerInfo to remove Pipeline object from it 
> -
>
> Key: HDDS-175
> URL: https://issues.apache.org/jira/browse/HDDS-175
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-175.00.patch, HDDS-175.01.patch, HDDS-175.02.patch, 
> HDDS-175.03.patch, HDDS-175.04.patch, HDDS-175.05.patch, HDDS-175.06.patch, 
> HDDS-175.07.patch, HDDS-175.08.patch, HDDS-175.09.patch, HDDS-175.10.patch, 
> HDDS-175.11.patch
>
>
> Refactor ContainerInfo to remove the Pipeline object from it. We can add the 
> four fields below to ContainerInfo to recreate the pipeline if required:
> # pipelineId
> # replication type
> # expected replication count
> # DataNodes where its replicas exist
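
For illustration, a hypothetical shape of ContainerInfo after the refactor (field names and types are assumptions based on the four fields listed above, not the actual patch):

{code:java}
import java.util.List;

// The Pipeline object is replaced by just enough state to rebuild a
// pipeline on demand.
class ContainerInfo {
  private String pipelineId;                           // 1. pipeline id
  private HddsProtos.ReplicationType replicationType;  // 2. replication type
  private int expectedReplicationCount;                // 3. expected replica count
  private List<DatanodeDetails> replicaLocations;      // 4. DNs holding replicas
}
{code}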






[jira] [Comment Edited] (HDDS-176) Add keyCount and container maximum size to ContainerData

2018-07-03 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531650#comment-16531650
 ] 

Hanisha Koneru edited comment on HDDS-176 at 7/3/18 4:54 PM:
-

LGTM. +1.

Thanks [~bharatviswa] for the contribution. Committed to HDDS-48 branch.


was (Author: hanishakoneru):
Thanks [~bharatviswa].

LGTM. +1.

> Add keyCount and container maximum size to ContainerData
> 
>
> Key: HDDS-176
> URL: https://issues.apache.org/jira/browse/HDDS-176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-176-HDDS-48.00.patch, HDDS-176-HDDS-48.01.patch
>
>
> # ContainerData should hold the container's maximum size, and this should be 
> serialized into the .container file. This is needed because the configured 
> container size can change over time, so old containers may have a different 
> maximum size than newly created containers.
>  # Also add a KeyCount field that records the number of keys in the container.






[jira] [Updated] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-07-03 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-175:

Attachment: HDDS-175.11.patch

> Refactor ContainerInfo to remove Pipeline object from it 
> -
>
> Key: HDDS-175
> URL: https://issues.apache.org/jira/browse/HDDS-175
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-175.00.patch, HDDS-175.01.patch, HDDS-175.02.patch, 
> HDDS-175.03.patch, HDDS-175.04.patch, HDDS-175.05.patch, HDDS-175.06.patch, 
> HDDS-175.07.patch, HDDS-175.08.patch, HDDS-175.09.patch, HDDS-175.10.patch, 
> HDDS-175.11.patch
>
>
> Refactor ContainerInfo to remove the Pipeline object from it. We can add the 
> four fields below to ContainerInfo to recreate the pipeline if required:
> # pipelineId
> # replication type
> # expected replication count
> # DataNodes where its replicas exist






[jira] [Commented] (HDDS-205) Add metrics to HddsDispatcher

2018-07-03 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531669#comment-16531669
 ] 

Hanisha Koneru commented on HDDS-205:
-

Committed to HDDS-48 branch. Thanks [~bharatviswa] for the contribution and 
[~shashikant] for reviews.

> Add metrics to HddsDispatcher
> -
>
> Key: HDDS-205
> URL: https://issues.apache.org/jira/browse/HDDS-205
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-205-HDDS-48.00.patch, HDDS-205-HDDS-48.01.patch, 
> HDDS-205-HDDS-48.02.patch, HDDS-205-HDDS-48.03.patch
>
>
> This patch adds metrics to the newly added HddsDispatcher.
> It uses the already existing ContainerMetrics.






[jira] [Commented] (HDDS-176) Add keyCount and container maximum size to ContainerData

2018-07-03 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531650#comment-16531650
 ] 

Hanisha Koneru commented on HDDS-176:
-

Thanks [~bharatviswa].

LGTM. +1.

> Add keyCount and container maximum size to ContainerData
> 
>
> Key: HDDS-176
> URL: https://issues.apache.org/jira/browse/HDDS-176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-176-HDDS-48.00.patch, HDDS-176-HDDS-48.01.patch
>
>
> # ContainerData should hold the container's maximum size, and this should be 
> serialized into the .container file. This is needed because the configured 
> container size can change over time, so old containers may have a different 
> maximum size than newly created containers.
>  # Also add a KeyCount field that records the number of keys in the container.






[jira] [Updated] (HDFS-13674) Improve documentation on Metrics

2018-07-03 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13674:

Attachment: HDFS-13674.000.patch

> Improve documentation on Metrics
> 
>
> Key: HDFS-13674
> URL: https://issues.apache.org/jira/browse/HDFS-13674
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, metrics
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-13674.000.patch
>
>
> There are a few confusing places in the [Hadoop Metrics 
> page|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Metrics.html].
>  For instance, there are duplicated entries such as {{FsImageLoadTime}}; some 
> quantile metrics do not have corresponding entries; and the descriptions of 
> some quantile metrics are not specific about what the {{num}} variable in the 
> metric name means.
> This JIRA aims to improve that page.






[jira] [Updated] (HDFS-13674) Improve documentation on Metrics

2018-07-03 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13674:

Status: Patch Available  (was: Open)

> Improve documentation on Metrics
> 
>
> Key: HDFS-13674
> URL: https://issues.apache.org/jira/browse/HDFS-13674
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, metrics
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-13674.000.patch
>
>
> There are a few confusing places in the [Hadoop Metrics 
> page|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Metrics.html].
>  For instance, there are duplicated entries such as {{FsImageLoadTime}}; some 
> quantile metrics do not have corresponding entries; and the descriptions of 
> some quantile metrics are not specific about what the {{num}} variable in the 
> metric name means.
> This JIRA aims to improve that page.






[jira] [Commented] (HDFS-13674) Improve documentation on Metrics

2018-07-03 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531648#comment-16531648
 ] 

Chao Sun commented on HDFS-13674:
-

Sorry for the delay, [~linyiqun]. I submitted patch v0; it would be great if you 
could take a look.

> Improve documentation on Metrics
> 
>
> Key: HDFS-13674
> URL: https://issues.apache.org/jira/browse/HDFS-13674
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, metrics
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-13674.000.patch
>
>
> There are a few confusing places in the [Hadoop Metrics 
> page|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Metrics.html].
>  For instance, there are duplicated entries such as {{FsImageLoadTime}}; some 
> quantile metrics do not have corresponding entries; and the descriptions of 
> some quantile metrics are not specific about what the {{num}} variable in the 
> metric name means.
> This JIRA aims to improve that page.






[jira] [Commented] (HDDS-205) Add metrics to HddsDispatcher

2018-07-03 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531635#comment-16531635
 ] 

Hanisha Koneru commented on HDDS-205:
-

Patch v03 LGTM. +1. 

The failed unit tests will be fixed in HDDS-204. We can go ahead and commit 
this patch.

> Add metrics to HddsDispatcher
> -
>
> Key: HDDS-205
> URL: https://issues.apache.org/jira/browse/HDDS-205
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-205-HDDS-48.00.patch, HDDS-205-HDDS-48.01.patch, 
> HDDS-205-HDDS-48.02.patch, HDDS-205-HDDS-48.03.patch
>
>
> This patch adds metrics to the newly added HddsDispatcher.
> It uses the already existing ContainerMetrics.






[jira] [Commented] (HDFS-13710) RBF: setQuota and getQuotaUsage should check the dfs.federation.router.quota.enable

2018-07-03 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531634#comment-16531634
 ] 

Íñigo Goiri commented on HDFS-13710:


I think the test might be a little too expensive (28 seconds) for what it 
is giving us.
I'm not sure there is a point in starting a full cluster just for this.
I think we should just start a Router and mock whatever is needed.

A couple of minor nits:
* I'm not sure there's a point in having the comment {{// check if quota is 
enabled in Router}}; it looks pretty obvious.
* We should do the checkOperation as the first operation even if the quota is 
not enabled (see the sketch below).
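
For illustration, a minimal sketch of the suggested ordering (a fragment; helper names like isQuotaEnabled are assumptions, not the actual RBF code):

{code:java}
// Validate the RPC state first, then reject the call if router quota
// support is disabled.
@Override
public void setQuota(String path, long namespaceQuota, long storagespaceQuota,
    StorageType type) throws IOException {
  rpcServer.checkOperation(NameNode.OperationCategory.WRITE);  // always first
  if (!router.isQuotaEnabled()) {
    throw new IOException("The quota system is disabled in the Router. "
        + "Set dfs.federation.router.quota.enable to true to use it.");
  }
  // ... existing logic that pushes the quota to the remote locations ...
}
{code}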

> RBF:  setQuota and getQuotaUsage should check the 
> dfs.federation.router.quota.enable
> 
>
> Key: HDFS-13710
> URL: https://issues.apache.org/jira/browse/HDFS-13710
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 2.9.1, 3.0.3
>Reporter: yanghuafeng
>Priority: Major
> Attachments: HDFS-13710.002.patch, HDFS-13710.003.patch, 
> HDFS-13710.004.patch, HDFS-13710.005.patch, HDFS-13710.patch
>
>
> When I ran the command below, some exceptions occurred.
>  
> {code:java}
> hdfs dfsrouteradmin -setQuota /tmp -ssQuota 1G 
> {code}
>  The logs follow.
> {code:java}
> Successfully set quota for mount point /tmp
> {code}
> It looks like the quota is set successfully, but exceptions appear in 
> the RBF server log.
> {code:java}
> java.io.IOException: No remote locations available
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1002)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:967)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:940)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:84)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:255)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:238)
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolServerSideTranslatorPB.updateMountTableEntry(RouterAdminProtocolServerSideTranslatorPB.java:179)
> at 
> org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos$RouterAdminProtocolService$2.callBlockingMethod(RouterProtocolProtos.java:259)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> {code}
> I found that dfs.federation.router.quota.enable is false by default, which 
> causes the problem. I think we should check this parameter when we call 
> setQuota and getQuotaUsage. 
>  
>  






[jira] [Commented] (HDFS-13716) hdfs.DFSclient should log KMS DT acquisition at INFO level

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531619#comment-16531619
 ] 

genericqa commented on HDFS-13716:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 36s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13716 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930123/HDFS-13716.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e62da2690e7c 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 344f324 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24548/testReport/ |
| Max. process+thread count | 1429 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24548/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> hdfs.DFSclient should log KMS DT acquisition 

[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-07-03 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531595#comment-16531595
 ] 

Anu Engineer commented on HDFS-12615:
-

+1, Please go ahead with the fixes.

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks set of improvements over the Router-based HDFS 
> federation (HDFS-10467).






[jira] [Commented] (HDDS-179) CloseContainer command should be executed only if all the prior "Write" type container requests get executed

2018-07-03 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531535#comment-16531535
 ] 

Shashikant Banerjee commented on HDDS-179:
--

Patch v1 depends on HDDS-48. I will submit it once HDDS-48 gets merged 
to trunk.

> CloseContainer command should be executed only if all the  prior "Write" type 
> container requests get executed
> -
>
> Key: HDDS-179
> URL: https://issues.apache.org/jira/browse/HDDS-179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-179.01.patch
>
>
> When a CloseContainer command request comes to a Datanode (via the SCM 
> heartbeat response) through the Ratis protocol, all the previously enqueued 
> "Write" type requests, such as putKey, WriteChunk, DeleteKey, CompactChunk, 
> etc., should be executed before the CloseContainer request is executed. This 
> synchronization needs to be handled in the containerStateMachine. This Jira 
> aims to address that.
>  
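
For illustration, a hypothetical sketch of the ordering constraint (a fragment; the dispatcher and executor fields, the proto getters, and the map are assumptions, not the actual patch; cleanup and failure handling are omitted for brevity):

{code:java}
// Remember the last pending write per container and chain CloseContainer
// behind it.
private final ConcurrentHashMap<Long, CompletableFuture<?>> lastWrite =
    new ConcurrentHashMap<>();

CompletableFuture<ContainerCommandResponseProto> submit(
    ContainerCommandRequestProto request) {
  long containerID = request.getContainerID();
  if (request.getCmdType() == ContainerProtos.Type.CloseContainer) {
    // Run close only after every previously enqueued write has completed.
    CompletableFuture<?> pending = lastWrite.getOrDefault(
        containerID, CompletableFuture.completedFuture(null));
    return pending.thenApply(ignored -> dispatcher.dispatch(request));
  }
  CompletableFuture<ContainerCommandResponseProto> write =
      CompletableFuture.supplyAsync(() -> dispatcher.dispatch(request), executor);
  lastWrite.put(containerID, write);
  return write;
}
{code}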






[jira] [Updated] (HDDS-179) CloseContainer command should be executed only if all the prior "Write" type container requests get executed

2018-07-03 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-179:
-
Attachment: HDDS-179.01.patch

> CloseContainer command should be executed only if all the  prior "Write" type 
> container requests get executed
> -
>
> Key: HDDS-179
> URL: https://issues.apache.org/jira/browse/HDDS-179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-179.01.patch
>
>
> When a CloseContainer command request comes to a Datanode (via the SCM 
> heartbeat response) through the Ratis protocol, all the previously enqueued 
> "Write" type requests, such as putKey, WriteChunk, DeleteKey, CompactChunk, 
> etc., should be executed before the CloseContainer request is executed. This 
> synchronization needs to be handled in the containerStateMachine. This Jira 
> aims to address that.
>  






[jira] [Commented] (HDFS-13710) RBF: setQuota and getQuotaUsage should check the dfs.federation.router.quota.enable

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531515#comment-16531515
 ] 

genericqa commented on HDFS-13710:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 34m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
39s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13710 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930114/HDFS-13710.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 685ee827aba9 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 344f324 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24547/testReport/ |
| Max. process+thread count | 955 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24547/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF:  setQuota and getQuotaUsage should check the 
> dfs.federation.router.quota.enable
> 

[jira] [Updated] (HDDS-212) Introduce NodeStateManager to manage the state of Datanodes in SCM

2018-07-03 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-212:
-
Attachment: HDDS-212.000.patch

> Introduce NodeStateManager to manage the state of Datanodes in SCM
> --
>
> Key: HDDS-212
> URL: https://issues.apache.org/jira/browse/HDDS-212
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-212.000.patch
>
>
> Introducing {{NodeStateManager}} will make the lifecycle management of 
> datanodes in SCM easy. NodeStateManager will be responsible for marking 
> datanodes as stale or dead when heartbeats are not received, and it will 
> maintain the current state of all the datanodes in the cluster. 
> NodeStateManager should be the only place where we maintain node state 
> information; every other component should query NodeStateManager for it.
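
For illustration, a minimal sketch of the heartbeat-driven state check (class, map, and threshold names are assumptions, not the actual patch):

{code:java}
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// A periodic task moves nodes HEALTHY -> STALE -> DEAD based on the time
// since their last heartbeat.
class NodeStateManagerSketch {
  enum NodeState { HEALTHY, STALE, DEAD }

  private final Map<UUID, Long> lastHeartbeatMs = new ConcurrentHashMap<>();
  private final Map<UUID, NodeState> nodeStates = new ConcurrentHashMap<>();

  void onHeartbeat(UUID datanode) {
    lastHeartbeatMs.put(datanode, System.currentTimeMillis());
  }

  void checkNodeStates(long staleIntervalMs, long deadIntervalMs) {
    long now = System.currentTimeMillis();
    for (Map.Entry<UUID, Long> entry : lastHeartbeatMs.entrySet()) {
      long sinceLast = now - entry.getValue();
      if (sinceLast > deadIntervalMs) {
        nodeStates.put(entry.getKey(), NodeState.DEAD);
      } else if (sinceLast > staleIntervalMs) {
        nodeStates.put(entry.getKey(), NodeState.STALE);
      } else {
        nodeStates.put(entry.getKey(), NodeState.HEALTHY);
      }
    }
  }
}
{code}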






[jira] [Updated] (HDDS-212) Introduce NodeStateManager to manage the state of Datanodes in SCM

2018-07-03 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-212:
-
Affects Version/s: 0.2.1
   Status: Patch Available  (was: Open)

> Introduce NodeStateManager to manage the state of Datanodes in SCM
> --
>
> Key: HDDS-212
> URL: https://issues.apache.org/jira/browse/HDDS-212
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-212.000.patch
>
>
> Introducing {{NodeStateManager}} will make the lifecycle management of 
> datanodes in SCM easy. NodeStateManager will be responsible for marking 
> datanodes as stale or dead when heartbeats are not received, and it will 
> maintain the current state of all the datanodes in the cluster. 
> NodeStateManager should be the only place where we maintain node state 
> information; every other component should query NodeStateManager for it.






[jira] [Updated] (HDDS-210) ozone getKey command always expects the filename to be present along with file-path in "-file" argument

2018-07-03 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-210:
-
Attachment: HDDS-210.001.patch

> ozone getKey command always expects the filename to be present along with 
> file-path in "-file" argument
> ---
>
> Key: HDDS-210
> URL: https://issues.apache.org/jira/browse/HDDS-210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
> Environment:  
>  
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-210.001.patch
>
>
> The ozone getKey command always expects the filename to be present along with 
> the file-path for the "-file" argument.
> It throws an error if the filename is not provided.
>  
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
> /test1/
> 2018-07-02 06:45:27,355 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Command Failed : {"httpCode":0,"shortMessage":"/test1/exists. Download will 
> overwrite an existing file. 
> Aborting.","resource":null,"message":"/test1/exists. Download will overwrite 
> an existing file. Aborting.","requestID":null,"hostName":null}
> [root@ozone-vm bin]# ./ozone oz -getKey /nnvolume1/bucket123/passwd -file 
> /test1/passwd
> 2018-07-02 06:45:39,722 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-07-02 06:45:40,354 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
> 2018-07-02 06:45:40,366 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-07-02 06:45:40,372 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 
> 300 ms (default)
> 2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-07-02 06:45:40,374 INFO conf.ConfUtils: 
> raft.client.async.scheduler-threads = 3 (default)
> 2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-07-02 06:45:40,507 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-07-02 06:45:40,814 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default){noformat}
>  
> Expectation:
> --
> ozone getKey should work even when only a file-path is provided (without a 
> filename). It should create a file in the given path, using the key's name 
> as the file name.
> i.e., given that /test1 is a directory,
> if ./ozone oz -getKey /nnvolume1/bucket123/passwd -file /test1 is run,
> the file 'passwd' should be created in the directory /test1.
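
For illustration, a hypothetical helper showing the expected resolution (names are illustrative, not the actual patch):

{code:java}
import java.io.File;

// When -file points at an existing directory, derive the file name from the
// key name instead of failing.
final class GetKeyPathResolver {
  static File resolveTargetFile(String fileArg, String keyName) {
    File target = new File(fileArg);
    if (target.isDirectory()) {
      // e.g. -file /test1 with key "passwd" -> /test1/passwd
      target = new File(target, new File(keyName).getName());
    }
    return target;
  }
}
{code}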






[jira] [Updated] (HDDS-75) Ozone: Support CopyContainer

2018-07-03 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-75:
---
Status: Patch Available  (was: Open)

> Ozone: Support CopyContainer
> 
>
> Key: HDDS-75
> URL: https://issues.apache.org/jira/browse/HDDS-75
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Major
>  Labels: OzonePostMerge
> Attachments: HDDS-75.005.patch, HDFS-11686-HDFS-7240.001.patch, 
> HDFS-11686-HDFS-7240.002.patch, HDFS-11686-HDFS-7240.003.patch, 
> HDFS-11686-HDFS-7240.004.patch
>
>
> Once a container is closed we need to copy the container to the correct pool 
> or re-encode the container to use erasure coding. The copyContainer allows 
> users to get the container as a tarball from the remote machine.
> The copyContainer is a basic step to move the raw container data from one 
> datanode to another node. It could be used by higher level components such 
> as the SCM, which ensures that the replication rules are satisfied.
> The CopyContainer by default works in a pull model: the destination datanode 
> could read the raw data from one or more source datanodes where the container 
> exists.
> The source provides a binary representation of the container over a common 
> interface which has two methods:
>  # prepare(containerName)
>  # copyData(String containerName, OutputStream destination)
> The prepare phase is called right after the closing event, and the 
> implementation could prepare for the copy by pre-creating a compressed tar 
> file from the container data. As a first step we can provide a simple 
> implementation which creates the tar files on demand.
> The destination datanode should retry the copy if the container in the source 
> node is not yet prepared.
> The raw container data is provided over HTTP. The HTTP endpoint should be 
> separated from the ObjectStore REST API (similar to the distinction between 
> HDFS-7240 and HDFS-13074).
> Long-term the HTTP endpoint should support HTTP Range requests: one container 
> could be copied from multiple sources by the destination.
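> A minimal Java sketch of the two-method source interface described above 
> (the exact names and signatures in the patch may differ):
> {code:java}
> import java.io.IOException;
> import java.io.OutputStream;
>
> public interface ContainerCopySource {
>
>   /** Called right after the container close event; an implementation
>       may pre-create a compressed tarball of the container data. */
>   void prepare(String containerName) throws IOException;
>
>   /** Stream the binary representation of the container to the
>       destination. Callers should retry if prepare() has not finished. */
>   void copyData(String containerName, OutputStream destination)
>       throws IOException;
> }
> {code}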



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13236) Standby NN down with error encountered while tailing edits

2018-07-03 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531436#comment-16531436
 ] 

Kihwal Lee edited comment on HDFS-13236 at 7/3/18 1:58 PM:
---

The "restart fails after upgrade" issue is being addressed in HDFS-13596. 
Workaround is to do "saveNamespace" against the active NN after an upgrade from 
2.x to 3.x. The Standby NN will need to be re-bootstrapped.


was (Author: kihwal):
The restart after upgrade issue is being addressed in HDFS-13596. 

> Standby NN down with error encountered while tailing edits
> --
>
> Key: HDFS-13236
> URL: https://issues.apache.org/jira/browse/HDFS-13236
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 3.0.0
>Reporter: Yuriy Malygin
>Priority: Major
>
> After updating Hadoop from 2.7.3 to 3.0.0, the standby NN went down with an 
> error encountered while tailing edits from the JN:
> {code:java}
> Feb 28 01:58:31 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:31,594 
> INFO [FSImageSaver for /one/hadoop-data/dfs of type IMAGE_AND_EDITS] 
> FSImageFormatProtobuf - Image file 
> /one/hadoop-data/dfs/current/fsimage.ckpt_012748979
> 98 of size 4595971949 bytes saved in 93 seconds.
> Feb 28 01:58:33 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:33,445 
> INFO [Standby State Checkpointer] NNStorageRetentionManager - Going to retain 
> 2 images with txid >= 1274897935
> Feb 28 01:58:33 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:33,445 
> INFO [Standby State Checkpointer] NNStorageRetentionManager - Purging old 
> image 
> FSImageFile(file=/one/hadoop-data/dfs/current/fsimage_01274897875, 
> cpktTxId
> =01274897875)
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,660 
> INFO [Edit log tailer] FSImage - Reading 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@6a168e6f 
> expecting start txid #1274897999
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,660 
> INFO [Edit log tailer] FSImage - Start loading edits file 
> http://srvd87.local:8480/getJournal?jid=datalab-hadoop-backup=1274897999=-64%3A10
> 56233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848=true, 
> http://srve2916.local:8480/getJournal?jid=datalab-hadoop-backup=1274897999=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848;
> inProgressOk=true
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,661 
> INFO [Edit log tailer] RedundantEditLogInputStream - Fast-forwarding stream 
> 'http://srvd87.local:8480/getJournal?jid=datalab-hadoop-backup=1274897999
> torageInfo=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848=true,
>  
> http://srve2916.local:8480/getJournal?jid=datalab-hadoop-backup=1274897999=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217
> -aef5-6ed206893848=true' to transaction ID 1274897999
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,661 
> INFO [Edit log tailer] RedundantEditLogInputStream - Fast-forwarding stream 
> 'http://srvd87.local:8480/getJournal?jid=datalab-hadoop-backup=1274897999=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848=true'
>  to transaction ID 1274897999
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,680 
> ERROR [Edit log tailer] FSEditLogLoader - Encountered exception on operation 
> AddOp [length=0, inodeId=145550319, 
> path=/kafka/parquet/infrastructureGrace/date=2018-02-28/_temporary/1/_temporary/attempt_1516181147167_20856_r_98_0/part-r-00098.gz.parquet,
>  replication=3, mtime=1519772206615, atime=1519772206615, 
> blockSize=134217728, blocks=[], permissions=root:supergroup:rw-r--r--, 
> aclEntries=null, 
> clientName=DFSClient_attempt_1516181147167_20856_r_98_0_1523538799_1, 
> clientMachine=10.137.2.142, overwrite=false, RpcClientId=, 
> RpcCallId=271996603, storagePolicyId=0, erasureCodingPolicyId=0, 
> opCode=OP_ADD, txid=1274898002]
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
> org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
> org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
> Feb 28 01:58:34 srvd2135 

[jira] [Commented] (HDDS-181) CloseContainer should commit all pending open Keys on a datanode

2018-07-03 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531445#comment-16531445
 ] 

Shashikant Banerjee commented on HDDS-181:
--

Patch v1 is dependent on HDDS-48. Will submit it once HDDS-48 gets merged to 
trunk.

> CloseContainer should commit all pending open Keys on a datanode
> ---
>
> Key: HDDS-181
> URL: https://issues.apache.org/jira/browse/HDDS-181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-181.01.patch
>
>
> A close container command arrives at the Datanode via the SCM heartbeat 
> response. It is then queued up over the Ratis pipeline. Once the command 
> execution starts inside the Datanode, it will mark the container as being in 
> the CLOSING state. All the pending open keys for the container will then be 
> committed, followed by the transition of the container state from CLOSING to 
> CLOSED. To achieve this, all the open keys for a container need to be tracked.
> This Jira aims to address this.
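> A rough outline of that close flow, as an illustrative sketch only (all type 
> and method names here are hypothetical):
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
>
> class CloseContainerSketch {
>
>   enum State { OPEN, CLOSING, CLOSED }
>
>   static class Container {
>     State state = State.OPEN;
>     final List<String> openKeys = new ArrayList<>(); // tracked open keys
>   }
>
>   /** Commit every tracked open key, then move CLOSING -> CLOSED. */
>   static void handleCloseContainer(Container container) {
>     container.state = State.CLOSING;   // stop accepting new writes
>     for (String key : container.openKeys) {
>       commitKey(key);                  // flush the pending open key
>     }
>     container.openKeys.clear();
>     container.state = State.CLOSED;
>   }
>
>   static void commitKey(String key) {
>     // placeholder: persist the key's metadata in the container DB
>   }
> }
> {code}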



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2018-07-03 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-13596:
--
Target Version/s: 3.2.0, 3.1.1, 3.0.4

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Priority: Blocker
>
> After a rollingUpgrade of the NN from 2.x to 3.x, if the NN is restarted, it 
> fails while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at 

[jira] [Updated] (HDDS-181) CloseContainer should commit all pending open Keys on a datanode

2018-07-03 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-181:
-
Attachment: HDDS-181.01.patch

> CloseContainer should commit all pending open Keys on a datanode
> ---
>
> Key: HDDS-181
> URL: https://issues.apache.org/jira/browse/HDDS-181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-181.01.patch
>
>
> A close container command arrives at the Datanode via the SCM heartbeat 
> response. It is then queued up over the Ratis pipeline. Once the command 
> execution starts inside the Datanode, it will mark the container as being in 
> the CLOSING state. All the pending open keys for the container will then be 
> committed, followed by the transition of the container state from CLOSING to 
> CLOSED. To achieve this, all the open keys for a container need to be tracked.
> This Jira aims to address this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13236) Standby NN down with error encountered while tailing edits

2018-07-03 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531436#comment-16531436
 ] 

Kihwal Lee commented on HDFS-13236:
---

The restart after upgrade issue is being addressed in HDFS-13596. 

> Standby NN down with error encountered while tailing edits
> --
>
> Key: HDFS-13236
> URL: https://issues.apache.org/jira/browse/HDFS-13236
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 3.0.0
>Reporter: Yuriy Malygin
>Priority: Major
>
> After updating Hadoop from 2.7.3 to 3.0.0, the standby NN went down with an 
> error encountered while tailing edits from the JN:
> {code:java}
> Feb 28 01:58:31 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:31,594 
> INFO [FSImageSaver for /one/hadoop-data/dfs of type IMAGE_AND_EDITS] 
> FSImageFormatProtobuf - Image file 
> /one/hadoop-data/dfs/current/fsimage.ckpt_012748979
> 98 of size 4595971949 bytes saved in 93 seconds.
> Feb 28 01:58:33 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:33,445 
> INFO [Standby State Checkpointer] NNStorageRetentionManager - Going to retain 
> 2 images with txid >= 1274897935
> Feb 28 01:58:33 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:33,445 
> INFO [Standby State Checkpointer] NNStorageRetentionManager - Purging old 
> image 
> FSImageFile(file=/one/hadoop-data/dfs/current/fsimage_01274897875, 
> cpktTxId
> =01274897875)
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,660 
> INFO [Edit log tailer] FSImage - Reading 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@6a168e6f 
> expecting start txid #1274897999
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,660 
> INFO [Edit log tailer] FSImage - Start loading edits file 
> http://srvd87.local:8480/getJournal?jid=datalab-hadoop-backup=1274897999=-64%3A10
> 56233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848=true, 
> http://srve2916.local:8480/getJournal?jid=datalab-hadoop-backup=1274897999=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848;
> inProgressOk=true
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,661 
> INFO [Edit log tailer] RedundantEditLogInputStream - Fast-forwarding stream 
> 'http://srvd87.local:8480/getJournal?jid=datalab-hadoop-backup=1274897999
> torageInfo=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848=true,
>  
> http://srve2916.local:8480/getJournal?jid=datalab-hadoop-backup=1274897999=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217
> -aef5-6ed206893848=true' to transaction ID 1274897999
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,661 
> INFO [Edit log tailer] RedundantEditLogInputStream - Fast-forwarding stream 
> 'http://srvd87.local:8480/getJournal?jid=datalab-hadoop-backup=1274897999=-64%3A1056233980%3A0%3ACID-1fba08aa-c8bd-4217-aef5-6ed206893848=true'
>  to transaction ID 1274897999
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 2018-02-28 01:58:34,680 
> ERROR [Edit log tailer] FSEditLogLoader - Encountered exception on operation 
> AddOp [length=0, inodeId=145550319, 
> path=/kafka/parquet/infrastructureGrace/date=2018-02-28/_temporary/1/_temporary/attempt_1516181147167_20856_r_98_0/part-r-00098.gz.parquet,
>  replication=3, mtime=1519772206615, atime=1519772206615, 
> blockSize=134217728, blocks=[], permissions=root:supergroup:rw-r--r--, 
> aclEntries=null, 
> clientName=DFSClient_attempt_1516181147167_20856_r_98_0_1523538799_1, 
> clientMachine=10.137.2.142, overwrite=false, RpcClientId=, 
> RpcCallId=271996603, storagePolicyId=0, erasureCodingPolicyId=0, 
> opCode=OP_ADD, txid=1274898002]
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: 
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
> org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
> org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:946)
> Feb 28 01:58:34 srvd2135 datalab-namenode[15566]: at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
> Feb 

[jira] [Commented] (HDDS-203) Add getCommittedBlockLength API in datanode

2018-07-03 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531437#comment-16531437
 ] 

Shashikant Banerjee commented on HDDS-203:
--

Patch v1 is dependent on HDDS-48. Will be submitting it once HDDS-48 gets 
merged to trunk.

> Add getCommittedBlockLength API in datanode
> ---
>
> Key: HDDS-203
> URL: https://issues.apache.org/jira/browse/HDDS-203
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-203.00.patch, HDDS-203.01.patch
>
>
> When a container gets closed on the Datanode while active writes are happening 
> from the OzoneClient, client write requests will fail with 
> ContainerClosedException. In such a case, the Ozone client needs to enquire 
> the last committed block length from the Datanodes and update the OzoneMaster 
> with the committed length for the block. This Jira proposes to add an RPC 
> call to get the last committed length of a block on a Datanode.
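> A sketch of the recovery flow this RPC would enable; the interfaces below 
> are stand-ins, not the actual client classes:
> {code:java}
> class CommittedLengthRecoverySketch {
>
>   interface DatanodeClient {
>     long getCommittedBlockLength(String blockId) throws Exception;
>   }
>
>   interface OzoneMaster {
>     void updateBlockLength(String blockId, long length) throws Exception;
>   }
>
>   /** On ContainerClosedException, ask a datanode for the committed
>       length and report it back to the OzoneMaster. */
>   static void recover(String blockId, DatanodeClient dn, OzoneMaster om)
>       throws Exception {
>     long committed = dn.getCommittedBlockLength(blockId);
>     om.updateBlockLength(blockId, committed);
>   }
> }
> {code}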



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-203) Add getCommittedBlockLength API in datanode

2018-07-03 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-203:
-
Attachment: HDDS-203.01.patch

> Add getCommittedBlockLength API in datanode
> ---
>
> Key: HDDS-203
> URL: https://issues.apache.org/jira/browse/HDDS-203
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-203.00.patch, HDDS-203.01.patch
>
>
> When a container gets closed on the Datanode while active writes are happening 
> from the OzoneClient, client write requests will fail with 
> ContainerClosedException. In such a case, the Ozone client needs to enquire 
> the last committed block length from the Datanodes and update the OzoneMaster 
> with the committed length for the block. This Jira proposes to add an RPC 
> call to get the last committed length of a block on a Datanode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-203) Add getCommittedBlockLength API in datanode

2018-07-03 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-203:
-
Status: Open  (was: Patch Available)

> Add getCommittedBlockLength API in datanode
> ---
>
> Key: HDDS-203
> URL: https://issues.apache.org/jira/browse/HDDS-203
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-203.00.patch, HDDS-203.01.patch
>
>
> When a container gets closed on the Datanode while active writes are happening 
> from the OzoneClient, client write requests will fail with 
> ContainerClosedException. In such a case, the Ozone client needs to enquire 
> the last committed block length from the Datanodes and update the OzoneMaster 
> with the committed length for the block. This Jira proposes to add an RPC 
> call to get the last committed length of a block on a Datanode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13672) clearCorruptLazyPersistFiles could crash NameNode

2018-07-03 Thread Andrew Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531431#comment-16531431
 ] 

Andrew Wang commented on HDFS-13672:


Hi Gabor, to respond to your question, I think the very common case will be 
zero lazy persist files, the rare case being "some" (~thousands), and the very 
very rare case lots (~millions).

I agree that holding the lock for a long time is an anti-pattern. I actually 
had a patch I was working on a while ago for a different feature that added a 
safe way of iterating over the block map.

However, for this case I don't know if it's worth spending a lot of time 
optimizing, since the # of corrupt blocks in the system is normally not that 
large. It's a rare situation for a NameNode to be transitioned to active while 
missing a lot of DNs like this (which is why we have startup safemode checks). This 
probably only happens during debugging, in which case we could also solve the 
problem by setting the scrubber interval to 0 to disable it.

[~jojochuang] what do you think?

> clearCorruptLazyPersistFiles could crash NameNode
> -
>
> Key: HDFS-13672
> URL: https://issues.apache.org/jira/browse/HDFS-13672
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13672.001.patch, HDFS-13672.002.patch
>
>
> I started a NameNode on a pretty large fsimage. Since the NameNode is started 
> without any DataNodes, all blocks (100 million) are "corrupt".
> Afterwards I observed FSNamesystem#clearCorruptLazyPersistFiles() held write 
> lock for a long time:
> {noformat}
> 18/06/12 12:37:03 INFO namenode.FSNamesystem: FSNamesystem write lock held 
> for 46024 ms via
> java.lang.Thread.getStackTrace(Thread.java:1559)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:198)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1689)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.clearCorruptLazyPersistFiles(FSNamesystem.java:5532)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:5543)
> java.lang.Thread.run(Thread.java:748)
> Number of suppressed write-lock reports: 0
> Longest write-lock held interval: 46024
> {noformat}
> Here's the relevant code:
> {code}
>   writeLock();
>   try {
> final Iterator<Block> it =
> blockManager.getCorruptReplicaBlockIterator();
> while (it.hasNext()) {
>   Block b = it.next();
>   BlockInfo blockInfo = blockManager.getStoredBlock(b);
>   if (blockInfo.getBlockCollection().getStoragePolicyID() == 
> lpPolicy.getId()) {
> filesToDelete.add(blockInfo.getBlockCollection());
>   }
> }
> for (BlockCollection bc : filesToDelete) {
>   LOG.warn("Removing lazyPersist file " + bc.getName() + " with no 
> replicas.");
>   changed |= deleteInternal(bc.getName(), false, false, false);
> }
>   } finally {
> writeUnlock();
>   }
> {code}
> In essence, the iteration over the corrupt replica list should be broken down 
> into smaller iterations to avoid a single long wait.
> Since this operation holds the NameNode write lock for more than 45 seconds, the 
> default ZKFC connection timeout, it implies that an extreme case like this (100 
> million corrupt blocks) could lead to a NameNode failover.
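> A minimal sketch of such batched iteration, assuming processed entries are 
> removed from the corrupt-replica list (the real scrubber also filters by 
> lazy-persist storage policy, which makes safe resumption harder; the 
> Namesystem interface below is a stand-in):
> {code:java}
> import java.util.ArrayList;
> import java.util.Iterator;
> import java.util.List;
>
> class BatchedScrubberSketch {
>   static final int BATCH_SIZE = 10_000;
>
>   interface Namesystem {
>     void writeLock();
>     void writeUnlock();
>     Iterator<String> corruptReplicaIterator();
>     void delete(String blockId); // removes the entry from the list
>   }
>
>   static void scrub(Namesystem ns) {
>     while (true) {
>       List<String> batch = new ArrayList<>(BATCH_SIZE);
>       ns.writeLock();
>       try {
>         // Re-create the iterator for each batch: it is not valid across
>         // a lock release if the underlying set changes.
>         Iterator<String> it = ns.corruptReplicaIterator();
>         while (it.hasNext() && batch.size() < BATCH_SIZE) {
>           batch.add(it.next());
>         }
>         for (String id : batch) {
>           ns.delete(id); // bounded work per lock acquisition
>         }
>       } finally {
>         ns.writeUnlock(); // other operations can run between batches
>       }
>       if (batch.size() < BATCH_SIZE) {
>         break; // drained the corrupt-replica list
>       }
>     }
>   }
> }
> {code}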



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13658) fsck, dfsadmin -report, and NN WebUI should report number of blocks that have 1 replica

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531423#comment-16531423
 ] 

genericqa commented on HDFS-13658:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 28m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
24s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
17s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}241m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.TestReencryption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13658 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930096/HDFS-13658.007.patch |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  

[jira] [Updated] (HDFS-13716) hdfs.DFSclient should log KMS DT acquisition at INFO level

2018-07-03 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HDFS-13716:

Status: Patch Available  (was: Open)

> hdfs.DFSclient should log KMS DT acquisition at INFO level
> --
>
> Key: HDFS-13716
> URL: https://issues.apache.org/jira/browse/HDFS-13716
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HDFS-13716.001.patch
>
>
> We can see HDFS and Hive delegation token (DT) creation as INFO messages in 
> Spark application logs but not for KMS DTs:
> 18/06/07 10:02:35 INFO hdfs.DFSClient: Created token for admin: 
> HDFS_DELEGATION_TOKEN owner=ad...@example.net, renewer=yarn, realUser=, 
> issueDate=1528390955760, maxDate=1528995755760, sequenceNumber=125659, 
> masterKeyId=795 on ha-hdfs:dev
> 18/06/07 10:02:37 INFO hive.metastore: Trying to connect to metastore with 
> URI thrift://hostnam.example.net:9083
> 18/06/07 10:02:37 INFO hive.metastore: Opened a connection to metastore, 
> current connections: 1
> 18/06/07 10:02:37 INFO hive.metastore: Connected to metastore.
> 18/06/07 10:02:37 INFO security.HiveCredentialProvider: Get Token from hive 
> metastore: Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 1b 61 6e 69 73 
> 68 2d 61 64 6d 69 6e 40 43 4f 52 50 2e 49 4e 54 55 49 54 2e 4e 45 54 04 68 69 
> 76 65 00 8a 01 63 db 33 3a 83 8a 01 63 ff 3f be 83 8e 17 8d 8e 06 96
> Please log KMS DT acquisition events at INFO level, as it will help 
> supportability of encrypted HDFS filesystems.
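> As a sketch, the requested change amounts to a single INFO-level log line on 
> the KMS token-acquisition path (logger and message below are illustrative, 
> not the actual KMS client code):
> {code:java}
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> class KmsTokenLogSketch {
>   private static final Logger LOG =
>       LoggerFactory.getLogger(KmsTokenLogSketch.class);
>
>   /** Mirror the DFSClient HDFS_DELEGATION_TOKEN message for KMS DTs. */
>   static void logAcquired(Object token) {
>     LOG.info("Created KMS delegation token: {}", token);
>   }
> }
> {code}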



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13716) hdfs.DFSclient should log KMS DT acquisition at INFO level

2018-07-03 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HDFS-13716:

Attachment: HDFS-13716.001.patch

> hdfs.DFSclient should log KMS DT acquisition at INFO level
> --
>
> Key: HDFS-13716
> URL: https://issues.apache.org/jira/browse/HDFS-13716
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HDFS-13716.001.patch
>
>
> We can see HDFS and Hive delegation token (DT) creation as INFO messages in 
> Spark application logs but not for KMS DTs:
> 18/06/07 10:02:35 INFO hdfs.DFSClient: Created token for admin: 
> HDFS_DELEGATION_TOKEN owner=ad...@example.net, renewer=yarn, realUser=, 
> issueDate=1528390955760, maxDate=1528995755760, sequenceNumber=125659, 
> masterKeyId=795 on ha-hdfs:dev
> 18/06/07 10:02:37 INFO hive.metastore: Trying to connect to metastore with 
> URI thrift://hostnam.example.net:9083
> 18/06/07 10:02:37 INFO hive.metastore: Opened a connection to metastore, 
> current connections: 1
> 18/06/07 10:02:37 INFO hive.metastore: Connected to metastore.
> 18/06/07 10:02:37 INFO security.HiveCredentialProvider: Get Token from hive 
> metastore: Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 1b 61 6e 69 73 
> 68 2d 61 64 6d 69 6e 40 43 4f 52 50 2e 49 4e 54 55 49 54 2e 4e 45 54 04 68 69 
> 76 65 00 8a 01 63 db 33 3a 83 8a 01 63 ff 3f be 83 8e 17 8d 8e 06 96
> Please log KMS DT acquisition events at INFO level, as it will help 
> supportability of encrypted HDFS filesystems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13672) clearCorruptLazyPersistFiles could crash NameNode

2018-07-03 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13672:
--
Status: In Progress  (was: Patch Available)

> clearCorruptLazyPersistFiles could crash NameNode
> -
>
> Key: HDFS-13672
> URL: https://issues.apache.org/jira/browse/HDFS-13672
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13672.001.patch, HDFS-13672.002.patch
>
>
> I started a NameNode on a pretty large fsimage. Since the NameNode is started 
> without any DataNodes, all blocks (100 million) are "corrupt".
> Afterwards I observed FSNamesystem#clearCorruptLazyPersistFiles() held write 
> lock for a long time:
> {noformat}
> 18/06/12 12:37:03 INFO namenode.FSNamesystem: FSNamesystem write lock held 
> for 46024 ms via
> java.lang.Thread.getStackTrace(Thread.java:1559)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:198)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1689)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.clearCorruptLazyPersistFiles(FSNamesystem.java:5532)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:5543)
> java.lang.Thread.run(Thread.java:748)
> Number of suppressed write-lock reports: 0
> Longest write-lock held interval: 46024
> {noformat}
> Here's the relevant code:
> {code}
>   writeLock();
>   try {
> final Iterator<Block> it =
> blockManager.getCorruptReplicaBlockIterator();
> while (it.hasNext()) {
>   Block b = it.next();
>   BlockInfo blockInfo = blockManager.getStoredBlock(b);
>   if (blockInfo.getBlockCollection().getStoragePolicyID() == 
> lpPolicy.getId()) {
> filesToDelete.add(blockInfo.getBlockCollection());
>   }
> }
> for (BlockCollection bc : filesToDelete) {
>   LOG.warn("Removing lazyPersist file " + bc.getName() + " with no 
> replicas.");
>   changed |= deleteInternal(bc.getName(), false, false, false);
> }
>   } finally {
> writeUnlock();
>   }
> {code}
> In essence, the iteration over the corrupt replica list should be broken down 
> into smaller iterations to avoid a single long wait.
> Since this operation holds the NameNode write lock for more than 45 seconds, the 
> default ZKFC connection timeout, it implies that an extreme case like this (100 
> million corrupt blocks) could lead to a NameNode failover.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13672) clearCorruptLazyPersistFiles could crash NameNode

2018-07-03 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531389#comment-16531389
 ] 

Gabor Bota commented on HDFS-13672:
---

Thanks for the review [~andrew.wang]!

Based on what you've said, is it that this can't happen, *or* that it's just 
unlikely to have that many corrupt lazy persist files? It's a really good idea 
to have a counter for this, but I still feel we need to handle the case where a 
long-running operation like this can time out and crash the namenode.

In my next patch I will:
* check if there's a counter for the lazy persist files and, if there's not, 
create one
* modify clearCorruptLazyPersistFiles to re-create the iterator after every 
lock release
* fix the typos (sorry for that)
* add the config key documentation to hdfs-default.xml

> clearCorruptLazyPersistFiles could crash NameNode
> -
>
> Key: HDFS-13672
> URL: https://issues.apache.org/jira/browse/HDFS-13672
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13672.001.patch, HDFS-13672.002.patch
>
>
> I started a NameNode on a pretty large fsimage. Since the NameNode is started 
> without any DataNodes, all blocks (100 million) are "corrupt".
> Afterwards I observed FSNamesystem#clearCorruptLazyPersistFiles() held write 
> lock for a long time:
> {noformat}
> 18/06/12 12:37:03 INFO namenode.FSNamesystem: FSNamesystem write lock held 
> for 46024 ms via
> java.lang.Thread.getStackTrace(Thread.java:1559)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:198)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1689)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.clearCorruptLazyPersistFiles(FSNamesystem.java:5532)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:5543)
> java.lang.Thread.run(Thread.java:748)
> Number of suppressed write-lock reports: 0
> Longest write-lock held interval: 46024
> {noformat}
> Here's the relevant code:
> {code}
>   writeLock();
>   try {
> final Iterator<Block> it =
> blockManager.getCorruptReplicaBlockIterator();
> while (it.hasNext()) {
>   Block b = it.next();
>   BlockInfo blockInfo = blockManager.getStoredBlock(b);
>   if (blockInfo.getBlockCollection().getStoragePolicyID() == 
> lpPolicy.getId()) {
> filesToDelete.add(blockInfo.getBlockCollection());
>   }
> }
> for (BlockCollection bc : filesToDelete) {
>   LOG.warn("Removing lazyPersist file " + bc.getName() + " with no 
> replicas.");
>   changed |= deleteInternal(bc.getName(), false, false, false);
> }
>   } finally {
> writeUnlock();
>   }
> {code}
> In essence, the iteration over the corrupt replica list should be broken down 
> into smaller iterations to avoid a single long wait.
> Since this operation holds the NameNode write lock for more than 45 seconds, the 
> default ZKFC connection timeout, it implies that an extreme case like this (100 
> million corrupt blocks) could lead to a NameNode failover.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13672) clearCorruptLazyPersistFiles could crash NameNode

2018-07-03 Thread Andrew Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531371#comment-16531371
 ] 

Andrew Wang commented on HDFS-13672:


Hi Gabor, thanks for working on this,

I don't think it's thread safe to drop the lock while holding onto an iterator 
like this. This is a LinkedSetIterator and will throw a 
ConcurrentModificationException if the set is changed underneath it. We need a 
way to safely resume at a mid-point, and that seems a bit hard with 
LinkedSetIterator as it is.

Since I think the common case here is that there are zero lazy persist files, a 
better (though different) change would be to skip running this scrubber 
entirely if there aren't any lazy persist files. I'm hoping there's an easy way 
to add a counter for this (or some existing way to query if there are any lazy 
persist files).

We also need unit tests for new changes like this. I think you also typo'd the 
config key name with "sec" instead of "millis" or "ms". Config keys also need 
to be added to hdfs-default.xml with a description for documentation purposes.

> clearCorruptLazyPersistFiles could crash NameNode
> -
>
> Key: HDFS-13672
> URL: https://issues.apache.org/jira/browse/HDFS-13672
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HDFS-13672.001.patch, HDFS-13672.002.patch
>
>
> I started a NameNode on a pretty large fsimage. Since the NameNode is started 
> without any DataNodes, all blocks (100 million) are "corrupt".
> Afterwards I observed FSNamesystem#clearCorruptLazyPersistFiles() held write 
> lock for a long time:
> {noformat}
> 18/06/12 12:37:03 INFO namenode.FSNamesystem: FSNamesystem write lock held 
> for 46024 ms via
> java.lang.Thread.getStackTrace(Thread.java:1559)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:198)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1689)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.clearCorruptLazyPersistFiles(FSNamesystem.java:5532)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:5543)
> java.lang.Thread.run(Thread.java:748)
> Number of suppressed write-lock reports: 0
> Longest write-lock held interval: 46024
> {noformat}
> Here's the relevant code:
> {code}
>   writeLock();
>   try {
> final Iterator<Block> it =
> blockManager.getCorruptReplicaBlockIterator();
> while (it.hasNext()) {
>   Block b = it.next();
>   BlockInfo blockInfo = blockManager.getStoredBlock(b);
>   if (blockInfo.getBlockCollection().getStoragePolicyID() == 
> lpPolicy.getId()) {
> filesToDelete.add(blockInfo.getBlockCollection());
>   }
> }
> for (BlockCollection bc : filesToDelete) {
>   LOG.warn("Removing lazyPersist file " + bc.getName() + " with no 
> replicas.");
>   changed |= deleteInternal(bc.getName(), false, false, false);
> }
>   } finally {
> writeUnlock();
>   }
> {code}
> In essence, the iteration over the corrupt replica list should be broken down 
> into smaller iterations to avoid a single long wait.
> Since this operation holds the NameNode write lock for more than 45 seconds, the 
> default ZKFC connection timeout, it implies that an extreme case like this (100 
> million corrupt blocks) could lead to a NameNode failover.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-212) Introduce NodeStateManager to manage the state of Datanodes in SCM

2018-07-03 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-212:


 Summary: Introduce NodeStateManager to manage the state of 
Datanodes in SCM
 Key: HDDS-212
 URL: https://issues.apache.org/jira/browse/HDDS-212
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Nanda kumar
Assignee: Nanda kumar
 Fix For: 0.2.1


Introducing {{NodeStateManager}} will make the lifecycle management of 
datanodes in SCM easier. NodeStateManager will be responsible for marking 
datanodes as stale or dead when heartbeats are not received, and it will 
maintain the current state of all the datanodes in the cluster. 
NodeStateManager should be the only place where node state information is 
maintained; everyone else should query NodeStateManager for that information.
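As a sketch, the state could be derived from heartbeat age along these lines 
(the state names follow the description above; the thresholds and method names 
are hypothetical):
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class NodeStateManagerSketch {
  enum NodeState { HEALTHY, STALE, DEAD }

  private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();
  private final long staleIntervalMs;
  private final long deadIntervalMs;

  NodeStateManagerSketch(long staleIntervalMs, long deadIntervalMs) {
    this.staleIntervalMs = staleIntervalMs;
    this.deadIntervalMs = deadIntervalMs;
  }

  /** Record a heartbeat; the node counts as HEALTHY from this instant. */
  void onHeartbeat(String datanodeId) {
    lastHeartbeat.put(datanodeId, System.currentTimeMillis());
  }

  /** Single source of truth for node state, derived from heartbeat age. */
  NodeState getState(String datanodeId) {
    Long last = lastHeartbeat.get(datanodeId);
    if (last == null) {
      return NodeState.DEAD; // never heard from; sketch-level choice
    }
    long age = System.currentTimeMillis() - last;
    if (age > deadIntervalMs) {
      return NodeState.DEAD;
    }
    if (age > staleIntervalMs) {
      return NodeState.STALE;
    }
    return NodeState.HEALTHY;
  }
}
{code}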



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13716) hdfs.DFSclient should log KMS DT acquisition at INFO level

2018-07-03 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13716:
---
Description: 
We can see HDFS and Hive delegation token (DT) creation as INFO messages in 
Spark application logs but not for KMS DTs:

18/06/07 10:02:35 INFO hdfs.DFSClient: Created token for admin: 
HDFS_DELEGATION_TOKEN owner=ad...@example.net, renewer=yarn, realUser=, 
issueDate=1528390955760, maxDate=1528995755760, sequenceNumber=125659, 
masterKeyId=795 on ha-hdfs:dev
18/06/07 10:02:37 INFO hive.metastore: Trying to connect to metastore with URI 
thrift://hostnam.example.net:9083
18/06/07 10:02:37 INFO hive.metastore: Opened a connection to metastore, 
current connections: 1
18/06/07 10:02:37 INFO hive.metastore: Connected to metastore.
18/06/07 10:02:37 INFO security.HiveCredentialProvider: Get Token from hive 
metastore: Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 1b 61 6e 69 73 68 
2d 61 64 6d 69 6e 40 43 4f 52 50 2e 49 4e 54 55 49 54 2e 4e 45 54 04 68 69 76 
65 00 8a 01 63 db 33 3a 83 8a 01 63 ff 3f be 83 8e 17 8d 8e 06 96

Please log KMS DT acquisition events at INFO level, as it will help 
supportability of encrypted HDFS filesystems.

  was:
We can see HDFS and Hive delegation token (DT) creation as INFO messages in 
Spark application logs but not for KMS DTs:

18/06/07 10:02:35 INFO hdfs.DFSClient: Created token for anish-admin: 
HDFS_DELEGATION_TOKEN owner=ad...@example.net, renewer=yarn, realUser=, 
issueDate=1528390955760, maxDate=1528995755760, sequenceNumber=125659, 
masterKeyId=795 on ha-hdfs:dev
18/06/07 10:02:37 INFO hive.metastore: Trying to connect to metastore with URI 
thrift://hostnam.example.net:9083
18/06/07 10:02:37 INFO hive.metastore: Opened a connection to metastore, 
current connections: 1
18/06/07 10:02:37 INFO hive.metastore: Connected to metastore.
18/06/07 10:02:37 INFO security.HiveCredentialProvider: Get Token from hive 
metastore: Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 1b 61 6e 69 73 68 
2d 61 64 6d 69 6e 40 43 4f 52 50 2e 49 4e 54 55 49 54 2e 4e 45 54 04 68 69 76 
65 00 8a 01 63 db 33 3a 83 8a 01 63 ff 3f be 83 8e 17 8d 8e 06 96

Please log KMS DT acquisition events at INFO level, as it will help 
supportability of encrypted HDFS filesystems.


> hdfs.DFSclient should log KMS DT acquisition at INFO level
> --
>
> Key: HDFS-13716
> URL: https://issues.apache.org/jira/browse/HDFS-13716
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
>
> We can see HDFS and Hive delegation token (DT) creation as INFO messages in 
> Spark application logs but not for KMS DTs:
> 18/06/07 10:02:35 INFO hdfs.DFSClient: Created token for admin: 
> HDFS_DELEGATION_TOKEN owner=ad...@example.net, renewer=yarn, realUser=, 
> issueDate=1528390955760, maxDate=1528995755760, sequenceNumber=125659, 
> masterKeyId=795 on ha-hdfs:dev
> 18/06/07 10:02:37 INFO hive.metastore: Trying to connect to metastore with 
> URI thrift://hostnam.example.net:9083
> 18/06/07 10:02:37 INFO hive.metastore: Opened a connection to metastore, 
> current connections: 1
> 18/06/07 10:02:37 INFO hive.metastore: Connected to metastore.
> 18/06/07 10:02:37 INFO security.HiveCredentialProvider: Get Token from hive 
> metastore: Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 1b 61 6e 69 73 
> 68 2d 61 64 6d 69 6e 40 43 4f 52 50 2e 49 4e 54 55 49 54 2e 4e 45 54 04 68 69 
> 76 65 00 8a 01 63 db 33 3a 83 8a 01 63 ff 3f be 83 8e 17 8d 8e 06 96
> Please log KMS DT acquisition events at INFO level, as it will help 
> supportability of encrypted HDFS filesystems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13716) hdfs.DFSclient should log KMS DT acquisition at INFO level

2018-07-03 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13716:
---
Description: 
We can see HDFS and Hive delegation token (DT) creation as INFO messages in 
Spark application logs but not for KMS DTs:

18/06/07 10:02:35 INFO hdfs.DFSClient: Created token for anish-admin: 
HDFS_DELEGATION_TOKEN owner=ad...@example.net, renewer=yarn, realUser=, 
issueDate=1528390955760, maxDate=1528995755760, sequenceNumber=125659, 
masterKeyId=795 on ha-hdfs:dev
18/06/07 10:02:37 INFO hive.metastore: Trying to connect to metastore with URI 
thrift://hostnam.example.net:9083
18/06/07 10:02:37 INFO hive.metastore: Opened a connection to metastore, 
current connections: 1
18/06/07 10:02:37 INFO hive.metastore: Connected to metastore.
18/06/07 10:02:37 INFO security.HiveCredentialProvider: Get Token from hive 
metastore: Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 1b 61 6e 69 73 68 
2d 61 64 6d 69 6e 40 43 4f 52 50 2e 49 4e 54 55 49 54 2e 4e 45 54 04 68 69 76 
65 00 8a 01 63 db 33 3a 83 8a 01 63 ff 3f be 83 8e 17 8d 8e 06 96

Please log KMS DT acquisition events at INFO level, as it will help 
supportability of encrypted HDFS filesystems.

  was:
We can see HDFS and Hive delegation token (DT) creation as INFO messages in 
Spark application logs but not for KMS DTs:

18/06/07 10:02:35 INFO hdfs.DFSClient: Created token for anish-admin: 
HDFS_DELEGATION_TOKEN owner=anish-ad...@corp.intuit.net, renewer=yarn, 
realUser=, issueDate=1528390955760, maxDate=1528995755760, 
sequenceNumber=125659, masterKeyId=795 on ha-hdfs:dev
18/06/07 10:02:37 INFO hive.metastore: Trying to connect to metastore with URI 
thrift://pdevhdphmc01.corp.intuit.net:9083
18/06/07 10:02:37 INFO hive.metastore: Opened a connection to metastore, 
current connections: 1
18/06/07 10:02:37 INFO hive.metastore: Connected to metastore.
18/06/07 10:02:37 INFO security.HiveCredentialProvider: Get Token from hive 
metastore: Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 1b 61 6e 69 73 68 
2d 61 64 6d 69 6e 40 43 4f 52 50 2e 49 4e 54 55 49 54 2e 4e 45 54 04 68 69 76 
65 00 8a 01 63 db 33 3a 83 8a 01 63 ff 3f be 83 8e 17 8d 8e 06 96

Please log KMS DT acquisition events at INFO level, as it will help 
supportability of encrypted HDFS filesystems.


> hdfs.DFSclient should log KMS DT acquisition at INFO level
> --
>
> Key: HDFS-13716
> URL: https://issues.apache.org/jira/browse/HDFS-13716
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
>
> We can see HDFS and Hive delegation token (DT) creation as INFO messages in 
> Spark application logs but not for KMS DTs:
> 18/06/07 10:02:35 INFO hdfs.DFSClient: Created token for anish-admin: 
> HDFS_DELEGATION_TOKEN owner=ad...@example.net, renewer=yarn, realUser=, 
> issueDate=1528390955760, maxDate=1528995755760, sequenceNumber=125659, 
> masterKeyId=795 on ha-hdfs:dev
> 18/06/07 10:02:37 INFO hive.metastore: Trying to connect to metastore with 
> URI thrift://hostnam.example.net:9083
> 18/06/07 10:02:37 INFO hive.metastore: Opened a connection to metastore, 
> current connections: 1
> 18/06/07 10:02:37 INFO hive.metastore: Connected to metastore.
> 18/06/07 10:02:37 INFO security.HiveCredentialProvider: Get Token from hive 
> metastore: Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 1b 61 6e 69 73 
> 68 2d 61 64 6d 69 6e 40 43 4f 52 50 2e 49 4e 54 55 49 54 2e 4e 45 54 04 68 69 
> 76 65 00 8a 01 63 db 33 3a 83 8a 01 63 ff 3f be 83 8e 17 8d 8e 06 96
> Please log KMS DT acquisition events at INFO level, as it will help 
> supportability of encrypted HDFS filesystems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-75) Ozone: Support CopyContainer

2018-07-03 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531337#comment-16531337
 ] 

Elek, Marton commented on HDDS-75:
--

The patch is rebased, and it's now compatible with trunk. This is the first 
version; some minor modifications are still required (e.g. removing unused 
fields from the proto files).

This version uses the gRPC server of the GrpcXceiverService.

The patch doesn't contain any throttling or progress tracking; that could be 
done in a separate Jira.
The patch doesn't currently support offsets. I will simplify the communication 
by removing len/offset from the remote call.

Please move it to "Patch Available" state (I have no permission to do this).

> Ozone: Support CopyContainer
> 
>
> Key: HDDS-75
> URL: https://issues.apache.org/jira/browse/HDDS-75
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Major
>  Labels: OzonePostMerge
> Attachments: HDDS-75.005.patch, HDFS-11686-HDFS-7240.001.patch, 
> HDFS-11686-HDFS-7240.002.patch, HDFS-11686-HDFS-7240.003.patch, 
> HDFS-11686-HDFS-7240.004.patch
>
>
> Once a container is closed we need to copy the container to the correct pool 
> or re-encode the container to use erasure coding. The copyContainer allows 
> users to get the container as a tarball from the remote machine.
> The copyContainer is a basic step to move the raw container data from one 
> datanode to another node. It could be used by higher level components such 
> as the SCM, which ensures that the replication rules are satisfied.
> The CopyContainer by default works in a pull model: the destination datanode 
> could read the raw data from one or more source datanodes where the container 
> exists.
> The source provides a binary representation of the container over a common 
> interface which has two methods:
>  # prepare(containerName)
>  # copyData(String containerName, OutputStream destination)
> The prepare phase is called right after the closing event, and the 
> implementation could prepare for the copy by pre-creating a compressed tar 
> file from the container data. As a first step we can provide a simple 
> implementation which creates the tar files on demand.
> The destination datanode should retry the copy if the container in the source 
> node is not yet prepared.
> The raw container data is provided over HTTP. The HTTP endpoint should be 
> separated from the ObjectStore REST API (similar to the distinction between 
> HDFS-7240 and HDFS-13074).
> Long-term the HTTP endpoint should support HTTP Range requests: one container 
> could be copied from multiple sources by the destination.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-75) Ozone: Support CopyContainer

2018-07-03 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-75:
-
Attachment: (was: HDFS-11686-HDFS-7240.wip.patch)

> Ozone: Support CopyContainer
> 
>
> Key: HDDS-75
> URL: https://issues.apache.org/jira/browse/HDDS-75
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Major
>  Labels: OzonePostMerge
> Attachments: HDDS-75.005.patch, HDFS-11686-HDFS-7240.001.patch, 
> HDFS-11686-HDFS-7240.002.patch, HDFS-11686-HDFS-7240.003.patch, 
> HDFS-11686-HDFS-7240.004.patch
>
>
> Once a container is closed we need to copy the container to the correct pool 
> or re-encode the container to use erasure coding. The copyContainer allows 
> users to get the container as a tarball from the remote machine.
> The copyContainer is a basic step to move the raw container data from one 
> datanode to another node. It could be used by higher level components such 
> as the SCM, which ensures that the replication rules are satisfied.
> The CopyContainer by default works in a pull model: the destination datanode 
> could read the raw data from one or more source datanodes where the container 
> exists.
> The source provides a binary representation of the container over a common 
> interface which has two methods:
>  # prepare(containerName)
>  # copyData(String containerName, OutputStream destination)
> The prepare phase is called right after the closing event, and the 
> implementation could prepare for the copy by pre-creating a compressed tar 
> file from the container data. As a first step we can provide a simple 
> implementation which creates the tar files on demand.
> The destination datanode should retry the copy if the container in the source 
> node is not yet prepared.
> The raw container data is provided over HTTP. The HTTP endpoint should be 
> separated from the ObjectStore REST API (similar to the distinction between 
> HDFS-7240 and HDFS-13074).
> Long-term the HTTP endpoint should support HTTP Range requests: one container 
> could be copied from multiple sources by the destination.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


