[jira] [Commented] (HDFS-13711) Avoid using timeout datanodes for block replication

2018-06-29 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528541#comment-16528541
 ] 

genericqa commented on HDFS-13711:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}218m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13711 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-06-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528480#comment-16528480
 ] 

Íñigo Goiri commented on HDFS-13536:


[^HDFS-13536.003.patch] LGTM.
The unit tests seem unrelated.
+1

> [PROVIDED Storage] HA for InMemoryAliasMap
> --
>
> Key: HDFS-13536
> URL: https://issues.apache.org/jira/browse/HDFS-13536
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13536.001.patch, HDFS-13536.002.patch, 
> HDFS-13536.003.patch
>
>
> Provide HA for the {{InMemoryLevelDBAliasMapServer}} so that it works with an 
> HDFS NN configured for high availability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-06-29 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528460#comment-16528460
 ] 

genericqa commented on HDFS-13536:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
42s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}233m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13536 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929770/HDFS-13536.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| 

[jira] [Updated] (HDDS-205) Add metrics to HddsDispatcher

2018-06-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-205:

Fix Version/s: 0.2.1

> Add metrics to HddsDispatcher
> -
>
> Key: HDDS-205
> URL: https://issues.apache.org/jira/browse/HDDS-205
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-205-HDDS-48.00.patch, HDDS-205-HDDS-48.01.patch, 
> HDDS-205-HDDS-48.02.patch
>
>
> This patch adds metrics to the newly added HddsDispatcher.
> It reuses the already existing ContainerMetrics.
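The kind of per-operation counting a dispatcher-level metrics class keeps can be sketched in plain Java. This is only an illustration; the real patch uses Hadoop's existing ContainerMetrics (built on the Metrics2 framework), and the class and method names below are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/**
 * Hypothetical sketch of dispatcher metrics: one monotonically
 * increasing counter per container operation type.
 */
class DispatcherMetricsSketch {
    private final Map<String, LongAdder> opCount = new ConcurrentHashMap<>();

    /** Called once per dispatched request, keyed by operation name. */
    void incOpCount(String op) {
        opCount.computeIfAbsent(op, k -> new LongAdder()).increment();
    }

    /** Current count for an operation; 0 if never seen. */
    long getOpCount(String op) {
        LongAdder a = opCount.get(op);
        return a == null ? 0L : a.sum();
    }
}
```

A dispatcher would call {{incOpCount}} at the top of its dispatch method; the actual ContainerMetrics also tracks bytes and latencies per operation.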






[jira] [Commented] (HDFS-13711) Avoid using timeout datanodes for block replication

2018-06-29 Thread Anbang Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528418#comment-16528418
 ] 

Anbang Hu commented on HDFS-13711:
--

 [^HDFS-13711.000.patch] is for trunk, adapted from our internal patch. The 
issue we saw with block replication was that, when the cluster got busy, block 
replication went through many retries and, with pure randomization, had a high 
chance of hitting previously timed-out nodes. [~elgoiri] [~lukmajercak] You 
have both done a lot of work on the block replication/placement logic; could 
you take a look?

> Avoid using timeout datanodes for block replication
> ---
>
> Key: HDFS-13711
> URL: https://issues.apache.org/jira/browse/HDFS-13711
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
> Attachments: HDFS-13711.000.patch
>
>
> For block replication, source datanode selection in 
> {{BlockManager.chooseSourceDatanodes}} is randomized to avoid always choosing 
> the same datanode.
> To reduce the replication failure rate further, one option is to remember 
> which datanodes were previously tried but timed out, so that the next 
> replication attempt chooses other datanodes. The list of timed-out datanodes 
> should be reset once all datanodes have been exhausted. This is just one 
> example of choosing "better" sources; other criteria are possible, such as 
> avoiding nodes with high xceiver counts, so the improvement should be 
> designed as generically as possible to accept other criteria.
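The bookkeeping the description proposes could look roughly like the following. The class and method names are hypothetical illustrations, not the actual {{BlockManager.chooseSourceDatanodes}} code:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

/**
 * Hypothetical sketch: pick a random replication source while skipping
 * datanodes that previously timed out, and reset the timed-out list
 * once every candidate has been excluded.
 */
class TimeoutAwareSourcePicker {
    private final Set<String> timedOut = new HashSet<>();
    private final Random random;

    TimeoutAwareSourcePicker(Random random) {
        this.random = random;
    }

    /** Record that a replication attempt from this datanode timed out. */
    void markTimedOut(String datanode) {
        timedOut.add(datanode);
    }

    /** Random pick among candidates that have not timed out recently. */
    String chooseSource(List<String> candidates) {
        List<String> usable = new ArrayList<>();
        for (String dn : candidates) {
            if (!timedOut.contains(dn)) {
                usable.add(dn);
            }
        }
        if (usable.isEmpty()) {
            // All candidates exhausted: reset and consider everyone again.
            timedOut.clear();
            usable = new ArrayList<>(candidates);
        }
        return usable.get(random.nextInt(usable.size()));
    }
}
```

To keep the improvement generic, as the description suggests, the exclusion test could be abstracted behind a predicate so other criteria (e.g. high xceiver count) plug in the same way.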






[jira] [Updated] (HDFS-13711) Avoid using timeout datanodes for block replication

2018-06-29 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13711:
-
Attachment: HDFS-13711.000.patch
Status: Patch Available  (was: Open)

> Avoid using timeout datanodes for block replication
> ---
>
> Key: HDFS-13711
> URL: https://issues.apache.org/jira/browse/HDFS-13711
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
> Attachments: HDFS-13711.000.patch
>
>
> For block replication, source datanode selection in 
> {{BlockManager.chooseSourceDatanodes}} is randomized to avoid always choosing 
> the same datanode.
> To reduce the replication failure rate further, one option is to remember 
> which datanodes were previously tried but timed out, so that the next 
> replication attempt chooses other datanodes. The list of timed-out datanodes 
> should be reset once all datanodes have been exhausted. This is just one 
> example of choosing "better" sources; other criteria are possible, such as 
> avoiding nodes with high xceiver counts, so the improvement should be 
> designed as generically as possible to accept other criteria.






[jira] [Updated] (HDFS-13711) Avoid using timeout datanodes for block replication

2018-06-29 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13711:
-
Description: 
For block replication, source datanode selection in 
{{BlockManager.chooseSourceDatanodes}} is randomized to avoid always choosing 
the same datanode.

To reduce the replication failure rate further, one option is to remember which 
datanodes were previously tried but timed out, so that the next replication 
attempt chooses other datanodes. The list of timed-out datanodes should be 
reset once all datanodes have been exhausted. This is just one example of 
choosing "better" sources; other criteria are possible, such as avoiding nodes 
with high xceiver counts, so the improvement should be designed as generically 
as possible to accept other criteria.

> Avoid using timeout datanodes for block replication
> ---
>
> Key: HDFS-13711
> URL: https://issues.apache.org/jira/browse/HDFS-13711
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Major
>
> For block replication, source datanode selection in 
> {{BlockManager.chooseSourceDatanodes}} is randomized to avoid always choosing 
> the same datanode.
> To reduce the replication failure rate further, one option is to remember 
> which datanodes were previously tried but timed out, so that the next 
> replication attempt chooses other datanodes. The list of timed-out datanodes 
> should be reset once all datanodes have been exhausted. This is just one 
> example of choosing "better" sources; other criteria are possible, such as 
> avoiding nodes with high xceiver counts, so the improvement should be 
> designed as generically as possible to accept other criteria.






[jira] [Created] (HDFS-13711) Avoid using timeout datanodes for block replication

2018-06-29 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13711:


 Summary: Avoid using timeout datanodes for block replication
 Key: HDFS-13711
 URL: https://issues.apache.org/jira/browse/HDFS-13711
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Anbang Hu
Assignee: Anbang Hu









[jira] [Updated] (HDFS-13706) ClientGCIContext should be correctly named ClientGSIContext

2018-06-29 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-13706:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-12943
   Status: Resolved  (was: Patch Available)

I just committed this.

> ClientGCIContext should be correctly named ClientGSIContext
> ---
>
> Key: HDFS-13706
> URL: https://issues.apache.org/jira/browse/HDFS-13706
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13706-HDFS-12943.001.patch
>
>
> GSI stands for Global State Id. It's a client-side counterpart of NN's 
> {{GlobalStateIdContext}}.






[jira] [Updated] (HDFS-13707) [PROVIDED Storage] Fix failing integration tests in ITestProvidedImplementation

2018-06-29 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13707:
---
Fix Version/s: (was: 3.1.1)
   3.2.0

> [PROVIDED Storage] Fix failing integration tests in 
> ITestProvidedImplementation
> ---
>
> Key: HDFS-13707
> URL: https://issues.apache.org/jira/browse/HDFS-13707
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13707.001.patch
>
>
> Many tests in {{ITestProvidedImplementation}} use {{TextFileRegionAliasMap}} 
> as the AliasMap. This stores and retrieves path handles for provided 
> locations using UTF-8 encoding. HDFS-13186 implements the path handle 
> semantics for {{RawLocalFileSystem}} using {{LocalFileSystemPathHandle}}. 
> Storing and retrieving these path handles as UTF-8 strings in 
> {{TextFileRegionAliasMap}} results in improper serialization/deserialization, 
> and fails the associated tests.
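The serialization problem described above can be reproduced in isolation: opaque path handle bytes do not survive a UTF-8 decode/encode round trip, because byte sequences that are not valid UTF-8 are replaced during decoding. The helper below is a hypothetical illustration, not the actual {{TextFileRegionAliasMap}} code:

```java
import java.nio.charset.StandardCharsets;

/**
 * Illustrates why storing an opaque binary path handle as a UTF-8
 * string is lossy: invalid byte sequences are replaced with U+FFFD
 * on decoding and cannot be recovered when re-encoding.
 */
class Utf8RoundTrip {
    /** Store bytes as a UTF-8 string, then read them back. */
    static byte[] roundTrip(byte[] raw) {
        String stored = new String(raw, StandardCharsets.UTF_8);
        return stored.getBytes(StandardCharsets.UTF_8);
    }
}
```

Plain ASCII survives the round trip, which is why text-encoded alias maps appeared to work until binary path handles (HDFS-13186's {{LocalFileSystemPathHandle}}) were introduced.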






[jira] [Commented] (HDDS-205) Add metrics to HddsDispatcher

2018-06-29 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528333#comment-16528333
 ] 

genericqa commented on HDDS-205:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
21s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
4s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} hadoop-hdds/container-service in HDDS-48 has 6 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 16s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
|   | hadoop.ozone.ksm.TestKSMSQLCli |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
|   | hadoop.ozone.scm.TestXceiverClientManager |
|   | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.ozone.freon.TestDataValidate |
|   | 

[jira] [Assigned] (HDFS-13706) ClientGCIContext should be correctly named ClientGSIContext

2018-06-29 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko reassigned HDFS-13706:
--

Assignee: Konstantin Shvachko

> ClientGCIContext should be correctly named ClientGSIContext
> ---
>
> Key: HDFS-13706
> URL: https://issues.apache.org/jira/browse/HDFS-13706
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-13706-HDFS-12943.001.patch
>
>
> GSI stands for Global State Id. It's a client-side counterpart of NN's 
> {{GlobalStateIdContext}}.






[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-29 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528331#comment-16528331
 ] 

Konstantin Shvachko commented on HDFS-12976:


Hey Chao, you are right, there is no need to check {{isObserver}}, because 
{{NameNodeHAContext.allowStaleReads()}} already does that.
Could you please take a look at the "Inconsistent synchronization" warning from 
the build?
The failed tests pass locally.

> Introduce ObserverReadProxyProvider
> ---
>
> Key: HDFS-12976
> URL: https://issues.apache.org/jira/browse/HDFS-12976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12976-HDFS-12943.000.patch, 
> HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, 
> HDFS-12976-HDFS-12943.003.patch, HDFS-12976-HDFS-12943.004.patch, 
> HDFS-12976-HDFS-12943.005.patch, HDFS-12976-HDFS-12943.006.patch, 
> HDFS-12976-HDFS-12943.007.patch, HDFS-12976.WIP.patch
>
>
> {{StandbyReadProxyProvider}} should implement the {{FailoverProxyProvider}} 
> interface and be able to submit read requests to the ANN and SBN(s).






[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-29 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528300#comment-16528300
 ] 

Arpit Agarwal commented on HDDS-167:


The v07 patch fixes the OzoneManager web UI.

It required renaming om.js to OzoneManager.js and a minor fix to 
OzoneManagerHttpServer.java.

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, 
> HDDS-167.04.patch, HDDS-167.05.patch, HDDS-167.06.patch, HDDS-167.07.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.:
> - command-line
> - documentation
> - unit tests
> - acceptance tests






[jira] [Updated] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-167:
---
Attachment: HDDS-167.07.patch

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, 
> HDDS-167.04.patch, HDDS-167.05.patch, HDDS-167.06.patch, HDDS-167.07.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.:
> - command-line
> - documentation
> - unit tests
> - acceptance tests






[jira] [Comment Edited] (HDDS-198) Create AuditLogger mechanism to be used by OM, SCM and Datanode

2018-06-29 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528222#comment-16528222
 ] 

Ajay Kumar edited comment on HDDS-198 at 6/29/18 8:59 PM:
--

[~dineshchitlangia] thanks for working on this important functionality. Overall 
the patch LGTM. A few comments:
 * AuditLogger
 ** Change the default logging level from ALL to INFO.
 ** We can wrap SUCCESS and FAILURE in an inner enum as well.
 * package-info
 ** L48-49: simplify
{code:java}
This interface must be implemented by entities whose members will need to be 
logged in the audits.{code}
to
{code:java}
This interface must be implemented by entities requiring audit logging. {code}
L109: typo "log4j3".

 * TestAuditLogger
 ** There are a bunch of {{stackTrace}} (L55, 124) and {{System.out.println}} 
statements; we can replace them with an slf4j logger.
 ** L51: Replace {{file.delete()}} with {{FileUtils.deleteQuietly(file)}}.
 ** L38: AUDIT is not static final; checkstyle will probably flag it for 
camelcase.
 ** L43: rename tearUp to setUp (just a convention, I guess).
 ** L23, 27, 30: unused imports.
 ** VerifyLog L108: We can get rid of the try-catch and let the method throw 
the exception.
 ** There is a TestAuditLogger class in HDFS as well; shall we rename this one 
to TestOzoneAuditLogger or TestHddsAuditLogger to avoid confusion?
 * DummyAction and OMAction look pretty identical. Either we can use OMAction 
for the test cases as well, or just extend it in an inner class inside the 
related test class.
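The SUCCESS/FAILURE wrapper suggested above could be a small inner enum. A minimal sketch (the class and message names are illustrative, not from the patch):

```java
// Hypothetical sketch: wrapping audit outcomes in an inner enum so call
// sites pass AuditEventStatus.SUCCESS instead of a bare string literal.
public class AuditLoggerSketch {
  public enum AuditEventStatus {
    SUCCESS("SUCCESS"),
    FAILURE("FAILURE");

    private final String status;

    AuditEventStatus(String status) {
      this.status = status;
    }

    public String getStatus() {
      return status;
    }
  }

  public static void main(String[] args) {
    // Example: building an audit message from the enum instead of a literal.
    String msg = "op=createVolume | " + AuditEventStatus.SUCCESS.getStatus();
    System.out.println(msg);
  }
}
```

An enum also keeps the set of legal outcomes closed, so a typo like "SUCESS" becomes a compile error rather than a silently wrong log line.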


was (Author: ajayydv):
[~dineshchitlangia] thanks for working on this important functionality. Overall 
the patch LGTM. A few comments:
 * AuditLogger
 ** Change the default logging level from ALL to INFO.
 ** We can wrap SUCCESS and FAILURE in an inner enum as well.
 * package-info
 ** L48-49: simplify
{code:java}
This interface must be implemented by entities whose members will need to be 
logged in the audits.{code}
to
{code:java}
This interface must be implemented by entities requiring audit logging. {code}
L109: typo "log4j3".

 * TestAuditLogger
 ** There are a bunch of {{stackTrace}} (L55, 124) and {{System.out.println}} 
statements; we can replace them with an slf4j logger.
 ** L51: Replace {{file.delete()}} with {{FileUtils.deleteQuietly(file)}}.
 ** L38: AUDIT is not static final; checkstyle will probably flag it for 
camelcase.
 ** L43: rename tearUp to setUp (just a convention, I guess).
 ** L23, 27, 30: unused imports.
 ** There is a TestAuditLogger class in HDFS as well; shall we rename this one 
to TestOzoneAuditLogger or TestHddsAuditLogger to avoid confusion?
 * DummyAction and OMAction look pretty identical. Either we can use OMAction 
for the test cases as well, or just extend it in an inner class inside the 
related test class.

> Create AuditLogger mechanism to be used by OM, SCM and Datanode
> ---
>
> Key: HDDS-198
> URL: https://issues.apache.org/jira/browse/HDDS-198
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: audit, log4j2
> Fix For: 0.2.1
>
> Attachments: HDDS-198.001.patch, HDDS-198.002.patch, 
> HDDS-198.003.patch
>
>
> This Jira tracks the work to create a custom AuditLogger which can be used by 
> OM, SCM, and Datanode for auditing read/write events.
> The AuditLogger will be designed using log4j2, leveraging the MarkerFilter 
> approach so that auditing of read/write events can be turned on/off simply by 
> changing the log config.
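The MarkerFilter approach described above could be wired roughly as follows in a log4j2 configuration. This is only a sketch: the logger, appender, and marker names ("OMAudit", "AuditFile", "READ") are assumptions, not names from the patch.

```xml
<!-- Hypothetical log4j2 snippet: a MarkerFilter per event class lets
     read/write auditing be toggled purely in the log config. -->
<Loggers>
  <Logger name="OMAudit" level="INFO" additivity="false">
    <!-- DENY the READ marker to turn off auditing of read events;
         change onMatch to NEUTRAL to turn it back on. -->
    <MarkerFilter marker="READ" onMatch="DENY" onMismatch="NEUTRAL"/>
    <AppenderRef ref="AuditFile"/>
  </Logger>
</Loggers>
```

The point of the design is that no code change or redeploy is needed to change what gets audited, only a config edit.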






[jira] [Commented] (HDDS-198) Create AuditLogger mechanism to be used by OM, SCM and Datanode

2018-06-29 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528222#comment-16528222
 ] 

Ajay Kumar commented on HDDS-198:
-

[~dineshchitlangia] thanks for working on this important functionality. Overall 
the patch LGTM. A few comments:
 * AuditLogger
 ** Change the default logging level from ALL to INFO.
 ** We can wrap SUCCESS and FAILURE in an inner enum as well.
 * package-info
 ** L48-49: simplify
{code:java}
This interface must be implemented by entities whose members will need to be 
logged in the audits.{code}
to
{code:java}
This interface must be implemented by entities requiring audit logging. {code}
L109: typo "log4j3".

 * TestAuditLogger
 ** There are a bunch of {{stackTrace}} (L55, 124) and {{System.out.println}} 
statements; we can replace them with an slf4j logger.
 ** L51: Replace {{file.delete()}} with {{FileUtils.deleteQuietly(file)}}.
 ** L38: AUDIT is not static final; checkstyle will probably flag it for 
camelcase.
 ** L43: rename tearUp to setUp (just a convention, I guess).
 ** L23, 27, 30: unused imports.
 ** There is a TestAuditLogger class in HDFS as well; shall we rename this one 
to TestOzoneAuditLogger or TestHddsAuditLogger to avoid confusion?
 * DummyAction and OMAction look pretty identical. Either we can use OMAction 
for the test cases as well, or just extend it in an inner class inside the 
related test class.

> Create AuditLogger mechanism to be used by OM, SCM and Datanode
> ---
>
> Key: HDDS-198
> URL: https://issues.apache.org/jira/browse/HDDS-198
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: audit, log4j2
> Fix For: 0.2.1
>
> Attachments: HDDS-198.001.patch, HDDS-198.002.patch, 
> HDDS-198.003.patch
>
>
> This Jira tracks the work to create a custom AuditLogger which can be used by 
> OM, SCM, and Datanode for auditing read/write events.
> The AuditLogger will be designed using log4j2, leveraging the MarkerFilter 
> approach so that auditing of read/write events can be turned on/off simply by 
> changing the log config.






[jira] [Updated] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-06-29 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13536:
--
Status: Patch Available  (was: Open)

> [PROVIDED Storage] HA for InMemoryAliasMap
> --
>
> Key: HDFS-13536
> URL: https://issues.apache.org/jira/browse/HDFS-13536
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13536.001.patch, HDFS-13536.002.patch, 
> HDFS-13536.003.patch
>
>
> Provide HA for the {{InMemoryLevelDBAliasMapServer}} to work with HDFS NN 
> configured in high availability. 






[jira] [Updated] (HDFS-13707) [PROVIDED Storage] Fix failing integration tests in ITestProvidedImplementation

2018-06-29 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13707:
--
Fix Version/s: 3.1.1

> [PROVIDED Storage] Fix failing integration tests in 
> ITestProvidedImplementation
> ---
>
> Key: HDFS-13707
> URL: https://issues.apache.org/jira/browse/HDFS-13707
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Fix For: 3.1.1
>
> Attachments: HDFS-13707.001.patch
>
>
> Many tests in {{ITestProvidedImplementation}} use {{TextFileRegionAliasMap}} 
> as the AliasMap. This stores and retrieves path handles for provided 
> locations using UTF-8 encoding. HDFS-13186 implements the path handle 
> semantics for {{RawLocalFileSystem}} using {{LocalFileSystemPathHandle}}. 
> Storing and retrieving these path handles as UTF-8 strings in 
> {{TextFileRegionAliasMap}} results in improper serialization/deserialization, 
> and fails the associated tests.
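The serialization problem described above can be seen in isolation: an opaque binary path handle does not survive a bytes → UTF-8 string → bytes round trip, because invalid byte sequences are replaced during decoding. A standalone sketch (not code from the patch):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class PathHandleRoundTrip {
  public static void main(String[] args) {
    // An opaque binary handle: 0x8F is not a valid UTF-8 lead byte.
    byte[] handle = new byte[] {(byte) 0x8F, 0x01, 0x02};

    // Store the handle as a UTF-8 string, then read it back.
    // The invalid 0x8F byte is replaced with U+FFFD on decode,
    // so re-encoding yields different bytes than we started with.
    String stored = new String(handle, StandardCharsets.UTF_8);
    byte[] restored = stored.getBytes(StandardCharsets.UTF_8);

    System.out.println("round trip intact: " + Arrays.equals(handle, restored));
  }
}
```

This is why storing {{LocalFileSystemPathHandle}} bytes as UTF-8 text corrupts them; binary data needs an encoding such as Base64 (or raw byte storage) to round-trip safely.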






[jira] [Updated] (HDFS-13707) [PROVIDED Storage] Fix failing integration tests in ITestProvidedImplementation

2018-06-29 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13707:
--
Affects Version/s: (was: 3.1.0)

> [PROVIDED Storage] Fix failing integration tests in 
> ITestProvidedImplementation
> ---
>
> Key: HDFS-13707
> URL: https://issues.apache.org/jira/browse/HDFS-13707
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13707.001.patch
>
>
> Many tests in {{ITestProvidedImplementation}} use {{TextFileRegionAliasMap}} 
> as the AliasMap. This stores and retrieves path handles for provided 
> locations using UTF-8 encoding. HDFS-13186 implements the path handle 
> semantics for {{RawLocalFileSystem}} using {{LocalFileSystemPathHandle}}. 
> Storing and retrieving these path handles as UTF-8 strings in 
> {{TextFileRegionAliasMap}} results in improper serialization/deserialization, 
> and fails the associated tests.






[jira] [Updated] (HDFS-13707) [PROVIDED Storage] Fix failing integration tests in ITestProvidedImplementation

2018-06-29 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13707:
--
Affects Version/s: 3.1.0

> [PROVIDED Storage] Fix failing integration tests in 
> ITestProvidedImplementation
> ---
>
> Key: HDFS-13707
> URL: https://issues.apache.org/jira/browse/HDFS-13707
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13707.001.patch
>
>
> Many tests in {{ITestProvidedImplementation}} use {{TextFileRegionAliasMap}} 
> as the AliasMap. This stores and retrieves path handles for provided 
> locations using UTF-8 encoding. HDFS-13186 implements the path handle 
> semantics for {{RawLocalFileSystem}} using {{LocalFileSystemPathHandle}}. 
> Storing and retrieving these path handles as UTF-8 strings in 
> {{TextFileRegionAliasMap}} results in improper serialization/deserialization, 
> and fails the associated tests.






[jira] [Updated] (HDFS-13707) [PROVIDED Storage] Fix failing integration tests in ITestProvidedImplementation

2018-06-29 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13707:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [PROVIDED Storage] Fix failing integration tests in 
> ITestProvidedImplementation
> ---
>
> Key: HDFS-13707
> URL: https://issues.apache.org/jira/browse/HDFS-13707
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13707.001.patch
>
>
> Many tests in {{ITestProvidedImplementation}} use {{TextFileRegionAliasMap}} 
> as the AliasMap. This stores and retrieves path handles for provided 
> locations using UTF-8 encoding. HDFS-13186 implements the path handle 
> semantics for {{RawLocalFileSystem}} using {{LocalFileSystemPathHandle}}. 
> Storing and retrieving these path handles as UTF-8 strings in 
> {{TextFileRegionAliasMap}} results in improper serialization/deserialization, 
> and fails the associated tests.






[jira] [Commented] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-06-29 Thread Virajith Jalaparti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528216#comment-16528216
 ] 

Virajith Jalaparti commented on HDFS-13536:
---

Thanks for taking a look [~elgoiri]. Posted [^HDFS-13536.003.patch] based on 
your comments.

bq. Can we keep using {{DFSUtilClient#getHaNnRpcAddresses()}} in 
{{ConfiguredFailoverProxyProvider}}?

The change enables addresses specified by other config parameters to be used 
with {{ConfiguredFailoverProxyProvider}}. Existing code will continue using the 
first constructor ({{ConfiguredFailoverProxyProvider(Configuration conf, URI 
uri, Class xface, HAProxyFactory factory)}}), which uses 
{{HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY}} as the address. This has 
the same behavior as using {{DFSUtilClient#getHaNnRpcAddresses()}}. Other 
implementations of {{ConfiguredFailoverProxyProvider}}, such as 
{{InMemoryAliasMapFailoverProxyProvider}} in this patch, can specify other 
addresses.

bq. Is removing {{dfs.provided.aliasmap.inmemory.dnrpc-address}} backwards 
incompatible or is this part of the branch?
No, restored this one.

bq. InMemoryLevelDBAliasMapServer could be refactored to make it reusable.
Moved {{getBindAddress}} to {{DFSUtil}}.

bq. I would add a null check in the close of {{InMemoryLevelDBAliasMapClient}}.
Added this one.



> [PROVIDED Storage] HA for InMemoryAliasMap
> --
>
> Key: HDFS-13536
> URL: https://issues.apache.org/jira/browse/HDFS-13536
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13536.001.patch, HDFS-13536.002.patch, 
> HDFS-13536.003.patch
>
>
> Provide HA for the {{InMemoryLevelDBAliasMapServer}} to work with HDFS NN 
> configured in high availability. 






[jira] [Updated] (HDFS-13536) [PROVIDED Storage] HA for InMemoryAliasMap

2018-06-29 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13536:
--
Attachment: HDFS-13536.003.patch

> [PROVIDED Storage] HA for InMemoryAliasMap
> --
>
> Key: HDFS-13536
> URL: https://issues.apache.org/jira/browse/HDFS-13536
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13536.001.patch, HDFS-13536.002.patch, 
> HDFS-13536.003.patch
>
>
> Provide HA for the {{InMemoryLevelDBAliasMapServer}} to work with HDFS NN 
> configured in high availability. 






[jira] [Commented] (HDDS-187) Command status publisher for datanode

2018-06-29 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528189#comment-16528189
 ] 

genericqa commented on HDDS-187:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 37s{color} | 
{color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 37s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
20s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-hdds_server-scm generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 19s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 16s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m  

[jira] [Commented] (HDFS-11257) Evacuate DN when the remaining is negative

2018-06-29 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528177#comment-16528177
 ] 

genericqa commented on HDFS-11257:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 93m 
46s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-11257 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929740/HDFS-11257.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 2a48986f06bf 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 469b29c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24525/testReport/ |
| Max. process+thread count | 3081 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24525/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Evacuate DN when the 

[jira] [Comment Edited] (HDDS-205) Add metrics to HddsDispatcher

2018-06-29 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528149#comment-16528149
 ] 

Bharat Viswanadham edited comment on HDDS-205 at 6/29/18 7:53 PM:
--

Hi [~shashikant]

Thanks for the review.

Slightly modified the logic to instantiate the metrics during dispatcher 
creation.

Also fixed the findbugs issues and TestKeyValueHandler.

The remaining integration test failures will be handled as part of HDDS-182 and 
HDDS-204.


was (Author: bharatviswa):
Hi [~shashikant]

Thanks for the review.

Slightly modified the logic to instantiate the metrics during dispatcher 
creation.

Also fixed the findbugs issues and TestKeyValueHandler.

The remaining integration test failures will be handled as part of HDDS-182 and 
HDDS-201.

> Add metrics to HddsDispatcher
> -
>
> Key: HDDS-205
> URL: https://issues.apache.org/jira/browse/HDDS-205
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-205-HDDS-48.00.patch, HDDS-205-HDDS-48.01.patch, 
> HDDS-205-HDDS-48.02.patch
>
>
> This patch adds metrics to the newly added HddsDispatcher.
> It reuses the already existing ContainerMetrics.






[jira] [Commented] (HDDS-205) Add metrics to HddsDispatcher

2018-06-29 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528149#comment-16528149
 ] 

Bharat Viswanadham commented on HDDS-205:
-

Hi [~shashikant]

Thanks for the review.

Slightly modified the logic to instantiate the metrics during dispatcher 
creation.

Also fixed the findbugs issues and TestKeyValueHandler.

The remaining integration test failures will be handled as part of HDDS-182 and 
HDDS-201.

> Add metrics to HddsDispatcher
> -
>
> Key: HDDS-205
> URL: https://issues.apache.org/jira/browse/HDDS-205
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-205-HDDS-48.00.patch, HDDS-205-HDDS-48.01.patch, 
> HDDS-205-HDDS-48.02.patch
>
>
> This patch adds metrics to the newly added HddsDispatcher.
> It reuses the already existing ContainerMetrics.






[jira] [Updated] (HDDS-205) Add metrics to HddsDispatcher

2018-06-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-205:

Attachment: HDDS-205-HDDS-48.02.patch

> Add metrics to HddsDispatcher
> -
>
> Key: HDDS-205
> URL: https://issues.apache.org/jira/browse/HDDS-205
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-205-HDDS-48.00.patch, HDDS-205-HDDS-48.01.patch, 
> HDDS-205-HDDS-48.02.patch
>
>
> This patch adds metrics to the newly added HddsDispatcher.
> It reuses the already existing ContainerMetrics.






[jira] [Comment Edited] (HDDS-187) Command status publisher for datanode

2018-06-29 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528108#comment-16528108
 ] 

Ajay Kumar edited comment on HDDS-187 at 6/29/18 7:19 PM:
--

[~nandakumar131] thanks for the review.
1. NPE: You are right; if the CommandHandler falls behind, the current patch 
will remove a command from the map before it is processed. I think the right 
thing to do is not to remove PENDING commands from the context map (i.e., if 
the status is FAILED/EXECUTED, the CommandHandler has already processed it).
2. CommandHandlers: Removed stateContext from the constructors.
3. Although I have applied the checkstyle suggestion, the updated code looks 
uglier and more brittle. The 80-character limit doesn't even cover half of a 
modern screen, and it makes the code less readable. In HDDS we have overlooked 
it in many instances, but I think we should formalize it with some discussion 
in the community.
Patch v2 addresses the review.
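The fix described in point 1 amounts to evicting only commands whose status is terminal. A minimal stdlib sketch (the enum, map, and key types are illustrative, not the patch's actual types):

```java
import java.util.HashMap;
import java.util.Map;

public class CommandStatusEviction {
  // Illustrative status enum; the real patch's type may differ.
  enum Status { PENDING, EXECUTED, FAILED }

  public static void main(String[] args) {
    Map<Long, Status> cmdStatusMap = new HashMap<>();
    cmdStatusMap.put(1L, Status.PENDING);   // handler has not run yet
    cmdStatusMap.put(2L, Status.EXECUTED);  // already processed
    cmdStatusMap.put(3L, Status.FAILED);    // already processed

    // Evict only terminal statuses; PENDING entries stay in the map so the
    // CommandHandler can still find and update them later (avoiding the NPE).
    cmdStatusMap.values().removeIf(s -> s != Status.PENDING);

    System.out.println(cmdStatusMap);
  }
}
```

Evicting on report generation while keeping PENDING entries means a slow handler never loses its map entry, at the cost of entries lingering until the handler marks them terminal.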




was (Author: ajayydv):
[~nandakumar131] thanks for the review.
1. NPE: You are right; if the CommandHandler falls behind, the current patch 
will remove a command from the map before it is processed. I think the right 
thing to do is not to remove PENDING commands from the context map (i.e., if 
the status is FAILED/EXECUTED, the CommandHandler has already processed it).
2. CommandHandlers: Removed stateContext from the constructors.
3. Although I have applied the checkstyle suggestion, the updated code looks 
uglier and more brittle. The 80-character limit doesn't even cover half of a 
modern screen, and it makes the code less readable. In HDDS we have overlooked 
it in many instances, but I think we should formalize it with some discussion 
in the community.



> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch
>
>
> Currently SCM sends a set of commands to the DataNode. The DataNode executes 
> them via the CommandHandler. This jira intends to create a command status 
> publisher which will report the status of these commands back to the SCM.






[jira] [Updated] (HDDS-187) Command status publisher for datanode

2018-06-29 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-187:

Attachment: HDDS-187.02.patch

> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch, HDDS-187.02.patch
>
>
> Currently SCM sends a set of commands to the DataNode. The DataNode executes 
> them via the CommandHandler. This jira intends to create a command status 
> publisher which will report the status of these commands back to the SCM.






[jira] [Commented] (HDDS-187) Command status publisher for datanode

2018-06-29 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528108#comment-16528108
 ] 

Ajay Kumar commented on HDDS-187:
-

[~nandakumar131] thanks for the review.
1. NPE: You are right; if the CommandHandler falls behind, the current patch will 
remove a command from the map before it is processed. I think the right thing to 
do is to not remove PENDING commands from the context map (i.e., if the status is 
FAILED/EXECUTED, then the CommandHandler has already processed it).
2. CommandHandlers: Removed stateContext from the constructors.
3. Although I have applied the checkstyle suggestion, the updated code looks more 
ugly and brittle. The 80-character limit doesn't even cover half of a modern 
screen, and it makes the code less readable. In HDDS we have overlooked it in 
many instances, but I think we should formalize it with some discussion in the 
community. 
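The fix described in point 1 could be sketched roughly as follows. This is a minimal illustration of the idea only — the class, enum, and method names here are hypothetical and do not come from the actual HDDS-187 patch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: keep PENDING command statuses in the context map so a lagging
// CommandHandler can still find them; drop only statuses the handler has
// already processed (FAILED/EXECUTED).
public class CommandStatusMapSketch {
    public enum Status { PENDING, EXECUTED, FAILED }

    private final Map<Long, Status> cmdStatusMap = new ConcurrentHashMap<>();

    // A new command starts out as PENDING.
    public void addCommand(long cmdId) {
        cmdStatusMap.put(cmdId, Status.PENDING);
    }

    // The CommandHandler reports the final status after processing.
    public void updateStatus(long cmdId, Status status) {
        cmdStatusMap.put(cmdId, status);
    }

    // Called when statuses are published back to SCM: remove only entries
    // the CommandHandler has finished; PENDING entries stay in the map.
    public void removeProcessed() {
        cmdStatusMap.entrySet().removeIf(e -> e.getValue() != Status.PENDING);
    }

    public boolean contains(long cmdId) {
        return cmdStatusMap.containsKey(cmdId);
    }
}
```

With this shape, a publish cycle can never discard a command the handler has not yet reached, which avoids the NPE scenario described above.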



> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch
>
>
> Currently SCM sends a set of commands to the DataNode, which executes them via 
> CommandHandler. This jira intends to create a Command status publisher which 
> will report the status of these commands back to the SCM.






[jira] [Commented] (HDFS-11257) Evacuate DN when the remaining is negative

2018-06-29 Thread Anbang Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528000#comment-16528000
 ] 

Anbang Hu commented on HDFS-11257:
--

{{hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testPendingRecoveryTasks}}
 fails due to [^HDFS-11257.000.patch]. [^HDFS-11257.001.patch] is an updated 
patch with the fix.

{{hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture}}
 failure does not seem to be related to the patch. It passes locally for me.

> Evacuate DN when the remaining is negative
> --
>
> Key: HDFS-11257
> URL: https://issues.apache.org/jira/browse/HDFS-11257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Íñigo Goiri
>Assignee: Anbang Hu
>Priority: Major
> Attachments: HDFS-11257.000.patch, HDFS-11257.001.patch
>
>
> Datanodes have a maximum amount of disk they can use. This is set using 
> {{dfs.datanode.du.reserved}}. For example, if we have a 1TB disk and we set 
> the reserved space to 100GB, the DN can only use ~900GB. However, if we fill 
> the DN and later other processes (e.g., logs or co-located services) start to 
> use the disk space, the remaining space goes negative and the used storage 
> exceeds 100%.
> The Rebalancer or decommissioning would cover this situation. However, both 
> approaches require administrator intervention, while this is a situation that 
> violates the settings. Note that decommissioning would be too extreme, as it 
> would evacuate all the data.
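The accounting described above can be sketched numerically. This is a simplified model for illustration only, not the actual DataNode volume-accounting code; the method names and the exact formula are assumptions:

```java
// Simplified model of DataNode volume accounting: with a 1 TB disk and
// 100 GB reserved, the DN may use ~900 GB. If non-HDFS processes later
// consume disk space, the remaining space goes negative.
public class VolumeUsageSketch {
    static final long GB = 1024L * 1024 * 1024;

    // remaining = capacity - reserved - dfsUsed - nonDfsUsed
    public static long remaining(long capacity, long reserved,
                                 long dfsUsed, long nonDfsUsed) {
        return capacity - reserved - dfsUsed - nonDfsUsed;
    }

    // Usage relative to the space HDFS is allowed to use (capacity - reserved).
    public static double usedPercent(long capacity, long reserved, long dfsUsed) {
        return 100.0 * dfsUsed / (capacity - reserved);
    }
}
```

For a 1024 GB disk with 100 GB reserved, once HDFS has written 900 GB and other processes take another 80 GB, the remaining space is negative even though no single actor exceeded its own budget — which is exactly the situation the issue wants the NameNode to react to automatically.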






[jira] [Updated] (HDFS-11257) Evacuate DN when the remaining is negative

2018-06-29 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-11257:
-
Attachment: HDFS-11257.001.patch

> Evacuate DN when the remaining is negative
> --
>
> Key: HDFS-11257
> URL: https://issues.apache.org/jira/browse/HDFS-11257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Íñigo Goiri
>Assignee: Anbang Hu
>Priority: Major
> Attachments: HDFS-11257.000.patch, HDFS-11257.001.patch
>
>
> Datanodes have a maximum amount of disk they can use. This is set using 
> {{dfs.datanode.du.reserved}}. For example, if we have a 1TB disk and we set 
> the reserved space to 100GB, the DN can only use ~900GB. However, if we fill 
> the DN and later other processes (e.g., logs or co-located services) start to 
> use the disk space, the remaining space goes negative and the used storage 
> exceeds 100%.
> The Rebalancer or decommissioning would cover this situation. However, both 
> approaches require administrator intervention, while this is a situation that 
> violates the settings. Note that decommissioning would be too extreme, as it 
> would evacuate all the data.






[jira] [Commented] (HDFS-13707) [PROVIDED Storage] Fix failing integration tests in ITestProvidedImplementation

2018-06-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527981#comment-16527981
 ] 

Hudson commented on HDFS-13707:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14500 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14500/])
HDFS-13707. [PROVIDED Storage] Fix failing integration tests in (inigoiri: rev 
73746c5da76d5e39df131534a1ec35dfc5d2529b)
* (edit) 
hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/ITestProvidedImplementation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java


> [PROVIDED Storage] Fix failing integration tests in 
> ITestProvidedImplementation
> ---
>
> Key: HDFS-13707
> URL: https://issues.apache.org/jira/browse/HDFS-13707
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13707.001.patch
>
>
> Many tests in {{ITestProvidedImplementation}} use {{TextFileRegionAliasMap}} 
> as the AliasMap. This stores and retrieves path handles for provided 
> locations using UTF-8 encoding. HDFS-13186 implements the path handle 
> semantics for {{RawLocalFileSystem}} using {{LocalFileSystemPathHandle}}. 
> Storing and retrieving these path handles as UTF-8 strings in 
> {{TextFileRegionAliasMap}} results in improper serialization/deserialization, 
> and fails the associated tests.
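The serialization problem can be demonstrated in isolation: an opaque byte-array path handle does not survive a round trip through a UTF-8 string, because byte sequences that are not valid UTF-8 are replaced with the replacement character on decode. This is a standalone demonstration of the failure mode, not the actual {{TextFileRegionAliasMap}} code:

```java
import java.nio.charset.StandardCharsets;

// Demonstrates why storing opaque byte[] path handles as UTF-8 text is
// lossy: decoding invalid UTF-8 substitutes U+FFFD, so re-encoding
// produces different bytes and corrupts the handle.
public class Utf8RoundTrip {
    public static byte[] roundTrip(byte[] handle) {
        // Decode the raw handle bytes as if they were UTF-8 text...
        String asText = new String(handle, StandardCharsets.UTF_8);
        // ...and re-encode; invalid sequences have already been replaced.
        return asText.getBytes(StandardCharsets.UTF_8);
    }
}
```

Plain ASCII survives the round trip, which is why the tests only fail once {{LocalFileSystemPathHandle}} introduces handles containing arbitrary (non-text) bytes.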






[jira] [Commented] (HDFS-13708) change Files instead of NativeIO

2018-06-29 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527979#comment-16527979
 ] 

Giovanni Matteo Fumarola commented on HDFS-13708:
-

Thanks [~Jack-Lee] for opening this jira. However, this work is currently 
ongoing under HADOOP-15461.

> change Files instead of NativeIO
> 
>
> Key: HDFS-13708
> URL: https://issues.apache.org/jira/browse/HDFS-13708
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: lqjacklee
>Priority: Minor
>
> HDFS depends on native code to invoke Windows-related file operations. Since 
> JDK 1.7, {{java.nio.file.Files}} supports different FileSystem providers, 
> offering file I/O operations across platforms. 
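A minimal illustration of the proposal: {{java.nio.file.Files}} provides platform-independent equivalents for operations that would otherwise need platform-specific native calls. This is illustrative only — it is not taken from the Hadoop call sites this jira would change:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Platform-independent file operations via java.nio.file.Files, which
// dispatches to the correct FileSystem provider on each platform.
public class FilesSketch {
    public static Path createAndRename(Path dir) throws IOException {
        Path src = dir.resolve("src.txt");
        Path dst = dir.resolve("dst.txt");
        Files.write(src, "data".getBytes(StandardCharsets.UTF_8));
        // Atomic rename with no platform-specific native call.
        Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
        return dst;
    }
}
```

The same API covers permissions, attributes, and symlinks, which is why it is a candidate replacement for the NativeIO paths on Windows.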






[jira] [Commented] (HDFS-13707) [PROVIDED Storage] Fix failing integration tests in ITestProvidedImplementation

2018-06-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527950#comment-16527950
 ] 

Íñigo Goiri commented on HDFS-13707:


The Yetus run was as clean as it can be.
Thanks [~virajith] for the patch.
Committed to trunk.

> [PROVIDED Storage] Fix failing integration tests in 
> ITestProvidedImplementation
> ---
>
> Key: HDFS-13707
> URL: https://issues.apache.org/jira/browse/HDFS-13707
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-13707.001.patch
>
>
> Many tests in {{ITestProvidedImplementation}} use {{TextFileRegionAliasMap}} 
> as the AliasMap. This stores and retrieves path handles for provided 
> locations using UTF-8 encoding. HDFS-13186 implements the path handle 
> semantics for {{RawLocalFileSystem}} using {{LocalFileSystemPathHandle}}. 
> Storing and retrieving these path handles as UTF-8 strings in 
> {{TextFileRegionAliasMap}} results in improper serialization/deserialization, 
> and fails the associated tests.






[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-29 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527938#comment-16527938
 ] 

Arpit Agarwal commented on HDDS-167:


Thanks [~nandakumar131]. Nice catch. Looking into it.

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, 
> HDDS-167.04.patch, HDDS-167.05.patch, HDDS-167.06.patch
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.:
> - command line
> - documentation
> - unit tests
> - acceptance tests






[jira] [Updated] (HDDS-202) Suppress build error if there are no docs after excluding private annotations

2018-06-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-202:
---
Description: 
Seen in hadoop-ozone when building with the Maven hdds profile enabled.

{noformat}
$ mvn clean install -DskipTests -DskipShade -Phdds -Pdist --projects 
hadoop-ozone/ozonefs
...
[INFO] --- maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) @ 
hadoop-ozone-filesystem ---
[INFO]
ExcludePrivateAnnotationsStandardDoclet
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 13.223 s
[INFO] Finished at: 2018-06-28T19:46:49+09:00
[INFO] Final Memory: 122M/1196M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) on 
project hadoop-ozone-filesystem: MavenReportException: Error while generating 
Javadoc:
[ERROR] Exit code: 1 - Picked up _JAVA_OPTIONS: -Duser.language=en
[ERROR] java.lang.ArrayIndexOutOfBoundsException: 0
[ERROR] at 
com.sun.tools.doclets.formats.html.ConfigurationImpl.setTopFile(ConfigurationImpl.java:537)
[ERROR] at 
com.sun.tools.doclets.formats.html.ConfigurationImpl.setSpecificDocletOptions(ConfigurationImpl.java:309)
[ERROR] at 
com.sun.tools.doclets.internal.toolkit.Configuration.setOptions(Configuration.java:560)
[ERROR] at 
com.sun.tools.doclets.internal.toolkit.AbstractDoclet.startGeneration(AbstractDoclet.java:134)
[ERROR] at 
com.sun.tools.doclets.internal.toolkit.AbstractDoclet.start(AbstractDoclet.java:82)
[ERROR] at 
com.sun.tools.doclets.formats.html.HtmlDoclet.start(HtmlDoclet.java:80)
[ERROR] at 
com.sun.tools.doclets.standard.Standard.start(Standard.java:39)
[ERROR] at 
org.apache.hadoop.classification.tools.ExcludePrivateAnnotationsStandardDoclet.start(ExcludePrivateAnnotationsStandardDoclet.java:41)
[ERROR] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[ERROR] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[ERROR] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[ERROR] at java.lang.reflect.Method.invoke(Method.java:498)
[ERROR] at 
com.sun.tools.javadoc.DocletInvoker.invoke(DocletInvoker.java:310)
[ERROR] at 
com.sun.tools.javadoc.DocletInvoker.start(DocletInvoker.java:189)
[ERROR] at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:366)
[ERROR] at com.sun.tools.javadoc.Start.begin(Start.java:219)
[ERROR] at com.sun.tools.javadoc.Start.begin(Start.java:205)
[ERROR] at com.sun.tools.javadoc.Main.execute(Main.java:64)
[ERROR] at com.sun.tools.javadoc.Main.main(Main.java:54)
{noformat}

  was:
{noformat}
$ mvn clean install -DskipTests -DskipShade -Phdds -Pdist --projects 
hadoop-ozone/ozonefs
...
[INFO] --- maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) @ 
hadoop-ozone-filesystem ---
[INFO]
ExcludePrivateAnnotationsStandardDoclet
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 13.223 s
[INFO] Finished at: 2018-06-28T19:46:49+09:00
[INFO] Final Memory: 122M/1196M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) on 
project hadoop-ozone-filesystem: MavenReportException: Error while generating 
Javadoc:
[ERROR] Exit code: 1 - Picked up _JAVA_OPTIONS: -Duser.language=en
[ERROR] java.lang.ArrayIndexOutOfBoundsException: 0
[ERROR] at 
com.sun.tools.doclets.formats.html.ConfigurationImpl.setTopFile(ConfigurationImpl.java:537)
[ERROR] at 
com.sun.tools.doclets.formats.html.ConfigurationImpl.setSpecificDocletOptions(ConfigurationImpl.java:309)
[ERROR] at 
com.sun.tools.doclets.internal.toolkit.Configuration.setOptions(Configuration.java:560)
[ERROR] at 
com.sun.tools.doclets.internal.toolkit.AbstractDoclet.startGeneration(AbstractDoclet.java:134)
[ERROR] at 
com.sun.tools.doclets.internal.toolkit.AbstractDoclet.start(AbstractDoclet.java:82)
[ERROR] at 
com.sun.tools.doclets.formats.html.HtmlDoclet.start(HtmlDoclet.java:80)
[ERROR] at 
com.sun.tools.doclets.standard.Standard.start(Standard.java:39)
[ERROR] at 
org.apache.hadoop.classification.tools.ExcludePrivateAnnotationsStandardDoclet.start(ExcludePrivateAnnotationsStandardDoclet.java:41)
[ERROR] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[ERROR] at 

[jira] [Updated] (HDDS-202) Suppress build error if there are no docs after excluding private annotations

2018-06-29 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-202:
---
Summary: Suppress build error if there are no docs after excluding private 
annotations  (was: Doclet build fails in ozonefs)

> Suppress build error if there are no docs after excluding private annotations
> -
>
> Key: HDDS-202
> URL: https://issues.apache.org/jira/browse/HDDS-202
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDDS-202.1.patch
>
>
> {noformat}
> $ mvn clean install -DskipTests -DskipShade -Phdds -Pdist --projects 
> hadoop-ozone/ozonefs
> ...
> [INFO] --- maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) @ 
> hadoop-ozone-filesystem ---
> [INFO]
> ExcludePrivateAnnotationsStandardDoclet
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13.223 s
> [INFO] Finished at: 2018-06-28T19:46:49+09:00
> [INFO] Final Memory: 122M/1196M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) 
> on project hadoop-ozone-filesystem: MavenReportException: Error while 
> generating Javadoc:
> [ERROR] Exit code: 1 - Picked up _JAVA_OPTIONS: -Duser.language=en
> [ERROR] java.lang.ArrayIndexOutOfBoundsException: 0
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.ConfigurationImpl.setTopFile(ConfigurationImpl.java:537)
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.ConfigurationImpl.setSpecificDocletOptions(ConfigurationImpl.java:309)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.Configuration.setOptions(Configuration.java:560)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.AbstractDoclet.startGeneration(AbstractDoclet.java:134)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.AbstractDoclet.start(AbstractDoclet.java:82)
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.HtmlDoclet.start(HtmlDoclet.java:80)
> [ERROR]   at 
> com.sun.tools.doclets.standard.Standard.start(Standard.java:39)
> [ERROR]   at 
> org.apache.hadoop.classification.tools.ExcludePrivateAnnotationsStandardDoclet.start(ExcludePrivateAnnotationsStandardDoclet.java:41)
> [ERROR]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [ERROR]   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> [ERROR]   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [ERROR]   at java.lang.reflect.Method.invoke(Method.java:498)
> [ERROR]   at 
> com.sun.tools.javadoc.DocletInvoker.invoke(DocletInvoker.java:310)
> [ERROR]   at 
> com.sun.tools.javadoc.DocletInvoker.start(DocletInvoker.java:189)
> [ERROR]   at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:366)
> [ERROR]   at com.sun.tools.javadoc.Start.begin(Start.java:219)
> [ERROR]   at com.sun.tools.javadoc.Start.begin(Start.java:205)
> [ERROR]   at com.sun.tools.javadoc.Main.execute(Main.java:64)
> [ERROR]   at com.sun.tools.javadoc.Main.main(Main.java:54)
> {noformat}






[jira] [Commented] (HDDS-202) Doclet build fails in ozonefs

2018-06-29 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527901#comment-16527901
 ] 

Arpit Agarwal commented on HDDS-202:


This should be moved to the Hadoop Project.

I'll take a look, although I am not qualified to comment on this part of the 
code.

> Doclet build fails in ozonefs
> -
>
> Key: HDDS-202
> URL: https://issues.apache.org/jira/browse/HDDS-202
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDDS-202.1.patch
>
>
> {noformat}
> $ mvn clean install -DskipTests -DskipShade -Phdds -Pdist --projects 
> hadoop-ozone/ozonefs
> ...
> [INFO] --- maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) @ 
> hadoop-ozone-filesystem ---
> [INFO]
> ExcludePrivateAnnotationsStandardDoclet
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13.223 s
> [INFO] Finished at: 2018-06-28T19:46:49+09:00
> [INFO] Final Memory: 122M/1196M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) 
> on project hadoop-ozone-filesystem: MavenReportException: Error while 
> generating Javadoc:
> [ERROR] Exit code: 1 - Picked up _JAVA_OPTIONS: -Duser.language=en
> [ERROR] java.lang.ArrayIndexOutOfBoundsException: 0
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.ConfigurationImpl.setTopFile(ConfigurationImpl.java:537)
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.ConfigurationImpl.setSpecificDocletOptions(ConfigurationImpl.java:309)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.Configuration.setOptions(Configuration.java:560)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.AbstractDoclet.startGeneration(AbstractDoclet.java:134)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.AbstractDoclet.start(AbstractDoclet.java:82)
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.HtmlDoclet.start(HtmlDoclet.java:80)
> [ERROR]   at 
> com.sun.tools.doclets.standard.Standard.start(Standard.java:39)
> [ERROR]   at 
> org.apache.hadoop.classification.tools.ExcludePrivateAnnotationsStandardDoclet.start(ExcludePrivateAnnotationsStandardDoclet.java:41)
> [ERROR]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [ERROR]   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> [ERROR]   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [ERROR]   at java.lang.reflect.Method.invoke(Method.java:498)
> [ERROR]   at 
> com.sun.tools.javadoc.DocletInvoker.invoke(DocletInvoker.java:310)
> [ERROR]   at 
> com.sun.tools.javadoc.DocletInvoker.start(DocletInvoker.java:189)
> [ERROR]   at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:366)
> [ERROR]   at com.sun.tools.javadoc.Start.begin(Start.java:219)
> [ERROR]   at com.sun.tools.javadoc.Start.begin(Start.java:205)
> [ERROR]   at com.sun.tools.javadoc.Main.execute(Main.java:64)
> [ERROR]   at com.sun.tools.javadoc.Main.main(Main.java:54)
> {noformat}






[jira] [Assigned] (HDFS-13710) RBF: setQuota and getQuotaUsage should check the dfs.federation.router.quota.enable

2018-06-29 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDFS-13710:
--

Assignee: (was: Shashikant Banerjee)

> RBF:  setQuota and getQuotaUsage should check the 
> dfs.federation.router.quota.enable
> 
>
> Key: HDFS-13710
> URL: https://issues.apache.org/jira/browse/HDFS-13710
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 2.9.1, 3.0.3
>Reporter: yanghuafeng
>Priority: Major
> Attachments: HDFS-13710.patch
>
>
> When I use the command below, some exceptions happen.
>  
> {code:java}
> hdfs dfsrouteradmin -setQuota /tmp -ssQuota 1G 
> {code}
> The command output follows.
> {code:java}
> Successfully set quota for mount point /tmp
> {code}
> It looks like the quota is set successfully, but some exceptions appear in 
> the RBF server log.
> {code:java}
> java.io.IOException: No remote locations available
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1002)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:967)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:940)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:84)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:255)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:238)
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolServerSideTranslatorPB.updateMountTableEntry(RouterAdminProtocolServerSideTranslatorPB.java:179)
> at 
> org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos$RouterAdminProtocolService$2.callBlockingMethod(RouterProtocolProtos.java:259)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> {code}
> I found that dfs.federation.router.quota.enable is false by default, which 
> causes the problem. I think we should check this parameter when we call 
> setQuota and getQuotaUsage. 
>  
>  
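The suggested guard could be sketched as follows. The class, field, and method names here are hypothetical and are not the actual RouterAdminServer code; the sketch only shows the shape of the check, under the assumption that the quota-enable flag is read once from configuration:

```java
import java.io.IOException;

// Hypothetical sketch of the suggested fix: reject quota RPCs up front
// when dfs.federation.router.quota.enable is false, instead of failing
// later with "No remote locations available".
public class QuotaGuardSketch {
    private final boolean quotaEnabled;

    public QuotaGuardSketch(boolean quotaEnabled) {
        this.quotaEnabled = quotaEnabled;
    }

    private void checkQuotaEnabled() throws IOException {
        if (!quotaEnabled) {
            throw new IOException(
                "dfs.federation.router.quota.enable is false; "
                + "quota operations are not supported");
        }
    }

    public void setQuota(String path, long ssQuota) throws IOException {
        checkQuotaEnabled();
        // ... forward the quota update to the downstream namespaces ...
    }

    public long getQuotaUsage(String path) throws IOException {
        checkQuotaEnabled();
        // ... query the downstream namespaces ...
        return 0L;
    }
}
```

Failing fast this way also keeps the admin CLI from printing a misleading "Successfully set quota" message when the Router never had quota support enabled.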






[jira] [Issue Comment Deleted] (HDFS-13710) RBF: setQuota and getQuotaUsage should check the dfs.federation.router.quota.enable

2018-06-29 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-13710:
---
Comment: was deleted

(was: Thanks [~hfyang20071], for reporting the Issue. I would like to work on 
it.)

> RBF:  setQuota and getQuotaUsage should check the 
> dfs.federation.router.quota.enable
> 
>
> Key: HDFS-13710
> URL: https://issues.apache.org/jira/browse/HDFS-13710
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 2.9.1, 3.0.3
>Reporter: yanghuafeng
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13710.patch
>
>
> When I use the command below, some exceptions happen.
>  
> {code:java}
> hdfs dfsrouteradmin -setQuota /tmp -ssQuota 1G 
> {code}
> The command output follows.
> {code:java}
> Successfully set quota for mount point /tmp
> {code}
> It looks like the quota is set successfully, but some exceptions appear in 
> the RBF server log.
> {code:java}
> java.io.IOException: No remote locations available
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1002)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:967)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:940)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:84)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:255)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:238)
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolServerSideTranslatorPB.updateMountTableEntry(RouterAdminProtocolServerSideTranslatorPB.java:179)
> at 
> org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos$RouterAdminProtocolService$2.callBlockingMethod(RouterProtocolProtos.java:259)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> {code}
> I found that dfs.federation.router.quota.enable is false by default, which 
> causes the problem. I think we should check this parameter when we call 
> setQuota and getQuotaUsage. 
>  
>  






[jira] [Commented] (HDFS-13710) RBF: setQuota and getQuotaUsage should check the dfs.federation.router.quota.enable

2018-06-29 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527895#comment-16527895
 ] 

Shashikant Banerjee commented on HDFS-13710:


Thanks [~hfyang20071], for reporting the Issue. I would like to work on it.

> RBF:  setQuota and getQuotaUsage should check the 
> dfs.federation.router.quota.enable
> 
>
> Key: HDFS-13710
> URL: https://issues.apache.org/jira/browse/HDFS-13710
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 2.9.1, 3.0.3
>Reporter: yanghuafeng
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13710.patch
>
>
> When I use the command below, some exceptions happen.
>  
> {code:java}
> hdfs dfsrouteradmin -setQuota /tmp -ssQuota 1G 
> {code}
> The command output follows.
> {code:java}
> Successfully set quota for mount point /tmp
> {code}
> It looks like the quota is set successfully, but some exceptions appear in 
> the RBF server log.
> {code:java}
> java.io.IOException: No remote locations available
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1002)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:967)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:940)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:84)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:255)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:238)
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolServerSideTranslatorPB.updateMountTableEntry(RouterAdminProtocolServerSideTranslatorPB.java:179)
> at 
> org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos$RouterAdminProtocolService$2.callBlockingMethod(RouterProtocolProtos.java:259)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> {code}
> I found that dfs.federation.router.quota.enable is false by default, which 
> causes the problem. I think we should check this parameter when we call 
> setQuota and getQuotaUsage. 
>  
>  






[jira] [Assigned] (HDFS-13710) RBF: setQuota and getQuotaUsage should check the dfs.federation.router.quota.enable

2018-06-29 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDFS-13710:
--

Assignee: Shashikant Banerjee

> RBF:  setQuota and getQuotaUsage should check the 
> dfs.federation.router.quota.enable
> 
>
> Key: HDFS-13710
> URL: https://issues.apache.org/jira/browse/HDFS-13710
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 2.9.1, 3.0.3
>Reporter: yanghuafeng
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13710.patch
>
>
> When I use the command below, some exceptions happen.
>  
> {code:java}
> hdfs dfsrouteradmin -setQuota /tmp -ssQuota 1G 
> {code}
>  the logs follow.
> {code:java}
> Successfully set quota for mount point /tmp
> {code}
> It looks like the quota is set successfully, but some exceptions appear in 
> the RBF server log.
> {code:java}
> java.io.IOException: No remote locations available
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1002)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:967)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:940)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:84)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:255)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:238)
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolServerSideTranslatorPB.updateMountTableEntry(RouterAdminProtocolServerSideTranslatorPB.java:179)
> at 
> org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos$RouterAdminProtocolService$2.callBlockingMethod(RouterProtocolProtos.java:259)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> {code}
> I find that dfs.federation.router.quota.enable is false by default, and that 
> causes the problem. I think we should check this parameter when we call 
> setQuota and getQuotaUsage. 
>  
>  
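
The check proposed above could be sketched roughly as follows. This is an illustrative sketch only: RouterQuotaGuard, QUOTA_ENABLE_KEY, and the method names are hypothetical stand-ins for this discussion, not the actual classes in hadoop-hdfs-rbf.

```java
// Hypothetical sketch of guarding quota RPCs behind the config flag.
public class RouterQuotaGuard {

    // Mirrors dfs.federation.router.quota.enable, which defaults to false.
    static final String QUOTA_ENABLE_KEY = "dfs.federation.router.quota.enable";

    private final boolean quotaEnabled;

    public RouterQuotaGuard(boolean quotaEnabled) {
        this.quotaEnabled = quotaEnabled;
    }

    // Fail fast with a clear message instead of reporting success to the
    // client while the server side logs "No remote locations available".
    public void checkQuotaEnabled() {
        if (!quotaEnabled) {
            throw new UnsupportedOperationException(
                QUOTA_ENABLE_KEY + " is false; quota operations are disabled");
        }
    }

    public String setQuota(String mountPoint, long ssQuota) {
        checkQuotaEnabled(); // check before touching any remote namespace
        return "Successfully set quota for mount point " + mountPoint;
    }

    public static void main(String[] args) {
        RouterQuotaGuard enabled = new RouterQuotaGuard(true);
        System.out.println(enabled.setQuota("/tmp", 1L << 30));
        try {
            new RouterQuotaGuard(false).setQuota("/tmp", 1L << 30);
        } catch (UnsupportedOperationException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```

With a guard like this, a router with quota disabled rejects the call immediately instead of telling the client the quota was set while an exception lands only in the server log.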






[jira] [Comment Edited] (HDDS-206) default port number taken by ksm is 9862 while listing the volumes

2018-06-29 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527871#comment-16527871
 ] 

Shashikant Banerjee edited comment on HDDS-206 at 6/29/18 4:00 PM:
---

Thanks [~anu], for reviewing. The acceptance tests are working for me. 
{code:java}
==
Acceptance
==
Acceptance.Basic
==
Acceptance.Basic.Basic :: Smoketest ozone cluster startup
==
Test rest interface | PASS |
--
Check webui static resources | PASS |
--
Start freon testing | PASS |
--
Acceptance.Basic.Basic :: Smoketest ozone cluster startup | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
Acceptance.Basic.Ozone-Shell :: Test ozone shell CLI usage
==
RestClient without http port | PASS |
--
RestClient with http port | PASS |
--
RestClient without host name | PASS |
--
RpcClient with port | PASS |
--
RpcClient without host | PASS |
--
RpcClient without scheme | PASS |
--
Acceptance.Basic.Ozone-Shell :: Test ozone shell CLI usage | PASS |
6 critical tests, 6 passed, 0 failed
6 tests total, 6 passed, 0 failed
==
Acceptance.Basic | PASS |
9 critical tests, 9 passed, 0 failed
9 tests total, 9 passed, 0 failed
==
Acceptance.Ozonefs
==
Acceptance.Ozonefs.Ozonefs :: Ozonefs test
==
Create volume and bucket | PASS |
--
Check volume from ozonefs | PASS |
--
Create directory from ozonefs | PASS |
--
Acceptance.Ozonefs.Ozonefs :: Ozonefs test | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
Acceptance.Ozonefs | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
Acceptance | PASS |
12 critical tests, 12 passed, 0 failed
12 tests total, 12 passed, 0 failed
=={code}
 

 The Jenkins runs don't show any failures, and the findbugs warning is not 
related to the patch either.


was (Author: shashikant):
Thanks [~anu], for reviewing. The acceptance tests are working for me. 

[jira] [Commented] (HDDS-206) default port number taken by ksm is 9862 while listing the volumes

2018-06-29 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527871#comment-16527871
 ] 

Shashikant Banerjee commented on HDDS-206:
--

Thanks [~anu], for reviewing. The acceptance tests are working for me. 
{code:java}
==
Acceptance
==
Acceptance.Basic
==
Acceptance.Basic.Basic :: Smoketest ozone cluster startup
==
Test rest interface | PASS |
--
Check webui static resources | PASS |
--
Start freon testing | PASS |
--
Acceptance.Basic.Basic :: Smoketest ozone cluster startup | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
Acceptance.Basic.Ozone-Shell :: Test ozone shell CLI usage
==
RestClient without http port | PASS |
--
RestClient with http port | PASS |
--
RestClient without host name | PASS |
--
RpcClient with port | PASS |
--
RpcClient without host | PASS |
--
RpcClient without scheme | PASS |
--
Acceptance.Basic.Ozone-Shell :: Test ozone shell CLI usage | PASS |
6 critical tests, 6 passed, 0 failed
6 tests total, 6 passed, 0 failed
==
Acceptance.Basic | PASS |
9 critical tests, 9 passed, 0 failed
9 tests total, 9 passed, 0 failed
==
Acceptance.Ozonefs
==
Acceptance.Ozonefs.Ozonefs :: Ozonefs test
==
Create volume and bucket | PASS |
--
Check volume from ozonefs | PASS |
--
Create directory from ozonefs | PASS |
--
Acceptance.Ozonefs.Ozonefs :: Ozonefs test | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
Acceptance.Ozonefs | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
Acceptance | PASS |
12 critical tests, 12 passed, 0 failed
12 tests total, 12 passed, 0 failed
=={code}
 

 The Jenkins runs don't show any failures, and the findbugs warning is not 
related to the patch either.

> default port number taken by ksm is 9862 while listing the volumes
> --
>
> Key: HDDS-206
> URL: https://issues.apache.org/jira/browse/HDDS-206
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-206.00.patch
>
>
> Here is the output of the ozone -listVolume command without any port mentioned.
> By default, it chooses port 9862, which is not mentioned in 
> ozone-site.xml.
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume o3://127.0.0.1/
> 2018-06-29 04:42:20,652 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-06-29 04:42:21,914 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:22,915 INFO ipc.Client: Retrying connect to server: 
> 
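The fallback behavior described in this report can be sketched as follows. OzonePortResolver is a hypothetical helper for illustration only, not the actual Ozone client code; it shows how a compiled-in default of 9862 wins when neither the o3:// URI nor the configuration supplies a port.

```java
import java.net.URI;

// Hypothetical illustration of the reported port-resolution behavior.
public class OzonePortResolver {

    static final int DEFAULT_OM_PORT = 9862; // compiled-in fallback

    public static int resolvePort(String address, int configuredPort) {
        URI uri = URI.create(address);
        if (uri.getPort() != -1) {
            return uri.getPort();      // an explicit port in the URI wins
        }
        if (configuredPort > 0) {
            return configuredPort;     // value from ozone-site.xml, if any
        }
        return DEFAULT_OM_PORT;        // otherwise the hard-coded default
    }

    public static void main(String[] args) {
        // No port in the URI, none configured: falls back to 9862.
        System.out.println(resolvePort("o3://127.0.0.1/", -1));
        // Explicit URI port takes precedence.
        System.out.println(resolvePort("o3://127.0.0.1:9889/", -1));
    }
}
```

Under this model, `./ozone oz -listVolume o3://127.0.0.1/` with no port in the URI and no configured value ends up retrying against 127.0.0.1:9862, which matches the retry log above.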

[jira] [Commented] (HDDS-206) default port number taken by ksm is 9862 while listing the volumes

2018-06-29 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527844#comment-16527844
 ] 

genericqa commented on HDDS-206:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-ozone/ozone-manager in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-206 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929715/HDDS-206.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7396f2a426bc 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 
12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git 

[jira] [Commented] (HDDS-202) Doclet build fails in ozonefs

2018-06-29 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527837#comment-16527837
 ] 

Anu Engineer commented on HDDS-202:
---

[~xyao], [~arpitagarwal], [~ste...@apache.org] This seems safe to me and fixes 
a doc build issue in Ozone. Since this change is in common, I want to make sure 
I don't impact other parts of the project. Could one of you please take a look?

 

> Doclet build fails in ozonefs
> -
>
> Key: HDDS-202
> URL: https://issues.apache.org/jira/browse/HDDS-202
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDDS-202.1.patch
>
>
> {noformat}
> $ mvn clean install -DskipTests -DskipShade -Phdds -Pdist --projects 
> hadoop-ozone/ozonefs
> ...
> [INFO] --- maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) @ 
> hadoop-ozone-filesystem ---
> [INFO]
> ExcludePrivateAnnotationsStandardDoclet
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13.223 s
> [INFO] Finished at: 2018-06-28T19:46:49+09:00
> [INFO] Final Memory: 122M/1196M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) 
> on project hadoop-ozone-filesystem: MavenReportException: Error while 
> generating Javadoc:
> [ERROR] Exit code: 1 - Picked up _JAVA_OPTIONS: -Duser.language=en
> [ERROR] java.lang.ArrayIndexOutOfBoundsException: 0
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.ConfigurationImpl.setTopFile(ConfigurationImpl.java:537)
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.ConfigurationImpl.setSpecificDocletOptions(ConfigurationImpl.java:309)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.Configuration.setOptions(Configuration.java:560)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.AbstractDoclet.startGeneration(AbstractDoclet.java:134)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.AbstractDoclet.start(AbstractDoclet.java:82)
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.HtmlDoclet.start(HtmlDoclet.java:80)
> [ERROR]   at 
> com.sun.tools.doclets.standard.Standard.start(Standard.java:39)
> [ERROR]   at 
> org.apache.hadoop.classification.tools.ExcludePrivateAnnotationsStandardDoclet.start(ExcludePrivateAnnotationsStandardDoclet.java:41)
> [ERROR]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [ERROR]   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> [ERROR]   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [ERROR]   at java.lang.reflect.Method.invoke(Method.java:498)
> [ERROR]   at 
> com.sun.tools.javadoc.DocletInvoker.invoke(DocletInvoker.java:310)
> [ERROR]   at 
> com.sun.tools.javadoc.DocletInvoker.start(DocletInvoker.java:189)
> [ERROR]   at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:366)
> [ERROR]   at com.sun.tools.javadoc.Start.begin(Start.java:219)
> [ERROR]   at com.sun.tools.javadoc.Start.begin(Start.java:205)
> [ERROR]   at com.sun.tools.javadoc.Main.execute(Main.java:64)
> [ERROR]   at com.sun.tools.javadoc.Main.main(Main.java:54)
> {noformat}






[jira] [Commented] (HDDS-206) default port number taken by ksm is 9862 while listing the volumes

2018-06-29 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527798#comment-16527798
 ] 

Anu Engineer commented on HDDS-206:
---

[~nilotpalnandi] Thanks for filing this issue. [~shashikant] Thanks for quickly 
fixing this issue. 

I ran the acceptance tests on my machine and I see some failures. Maybe we need 
to fix some issues in the acceptance tests too?
{noformat}
==
Acceptance
==
Acceptance.Basic
==
Acceptance.Basic.Basic :: Smoketest ozone cluster startup
==
Test rest interface   | PASS |
--
Check webui static resources  | PASS |
--
Start freon testing   | PASS |
--
Acceptance.Basic.Basic :: Smoketest ozone cluster startup | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
Acceptance.Basic.Ozone-Shell :: Test ozone shell CLI usage
==
RestClient without http port  | PASS |
--
RestClient with http port | PASS |
--
RestClient without host name  | PASS |
--
RpcClient with port   | PASS |
--
RpcClient without host    | FAIL |
Test timeout 2 minutes exceeded.
--
RpcClient without scheme  | PASS |
--
Acceptance.Basic.Ozone-Shell :: Test ozone shell CLI usage    | FAIL |
6 critical tests, 5 passed, 1 failed
6 tests total, 5 passed, 1 failed
==
Acceptance.Basic  | FAIL |
9 critical tests, 8 passed, 1 failed
9 tests total, 8 passed, 1 failed
==
Acceptance.Ozonefs
==
Acceptance.Ozonefs.Ozonefs :: Ozonefs test
==
Create volume and bucket  | PASS |
--
Check volume from ozonefs | PASS |
--
Create directory from ozonefs | PASS |
--
Acceptance.Ozonefs.Ozonefs :: Ozonefs test    | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
Acceptance.Ozonefs    | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==
Acceptance    | FAIL |
12 critical tests, 11 passed, 1 failed
12 tests total, 11 passed, 1 failed
==

{noformat}

> default port number taken by ksm is 9862 while listing the volumes
> --
>
> Key: HDDS-206
> URL: https://issues.apache.org/jira/browse/HDDS-206
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
> 

[jira] [Updated] (HDDS-206) default port number taken by ksm is 9862 while listing the volumes

2018-06-29 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-206:

Status: Patch Available  (was: Open)

> default port number taken by ksm is 9862 while listing the volumes
> --
>
> Key: HDDS-206
> URL: https://issues.apache.org/jira/browse/HDDS-206
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-206.00.patch
>
>
> Here is the output of the ozone -listVolume command without any port mentioned.
> By default, it chooses port 9862, which is not mentioned in 
> ozone-site.xml.
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume o3://127.0.0.1/
> 2018-06-29 04:42:20,652 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-06-29 04:42:21,914 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:22,915 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:23,917 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:24,925 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:25,928 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:26,931 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 5 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:27,932 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 6 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:28,934 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 7 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:29,935 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 8 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:30,938 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 9 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:31,075 [main] ERROR - Couldn't create protocol class 
> org.apache.hadoop.ozone.client.rpc.RpcClient exception:
> java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:292)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:172)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:156)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:111)
>  at org.apache.hadoop.ozone.web.ozShell.Handler.verifyURI(Handler.java:96)
>  at 
> org.apache.hadoop.ozone.web.ozShell.volume.ListVolumeHandler.execute(ListVolumeHandler.java:80)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.dispatch(Shell.java:395)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.run(Shell.java:135)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:114)
> Caused by: java.net.ConnectException: Call From ozone-vm/10.200.5.166 

[jira] [Updated] (HDDS-206) default port number taken by ksm is 9862 while listing the volumes

2018-06-29 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-206:
-
Attachment: HDDS-206.00.patch

> default port number taken by ksm is 9862 while listing the volumes
> --
>
> Key: HDDS-206
> URL: https://issues.apache.org/jira/browse/HDDS-206
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-206.00.patch
>
>
> Here is the output of the ozone -listVolume command without any port mentioned.
> By default, it chooses port 9862, which is not mentioned in 
> ozone-site.xml.
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume o3://127.0.0.1/
> 2018-06-29 04:42:20,652 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-06-29 04:42:21,914 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:22,915 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:23,917 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:24,925 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:25,928 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:26,931 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 5 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:27,932 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 6 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:28,934 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 7 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:29,935 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 8 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:30,938 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 9 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:31,075 [main] ERROR - Couldn't create protocol class 
> org.apache.hadoop.ozone.client.rpc.RpcClient exception:
> java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:292)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:172)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:156)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:111)
>  at org.apache.hadoop.ozone.web.ozShell.Handler.verifyURI(Handler.java:96)
>  at 
> org.apache.hadoop.ozone.web.ozShell.volume.ListVolumeHandler.execute(ListVolumeHandler.java:80)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.dispatch(Shell.java:395)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.run(Shell.java:135)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:114)
> Caused by: java.net.ConnectException: Call From 

[jira] [Commented] (HDFS-13710) RBF: setQuota and getQuotaUsage should check the dfs.federation.router.quota.enable

2018-06-29 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527567#comment-16527567
 ] 

genericqa commented on HDFS-13710:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
31s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13710 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929709/HDFS-13710.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7e4ce0e90fb0 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e4d7227 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24524/testReport/ |
| Max. process+thread count | 967 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24524/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF:  setQuota and getQuotaUsage should 

[jira] [Commented] (HDFS-13706) ClientGCIContext should be correctly named ClientGSIContext

2018-06-29 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527490#comment-16527490
 ] 

Erik Krogen commented on HDFS-13706:


+1, I have been wondering what GCI stood for.

> ClientGCIContext should be correctly named ClientGSIContext
> ---
>
> Key: HDFS-13706
> URL: https://issues.apache.org/jira/browse/HDFS-13706
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-13706-HDFS-12943.001.patch
>
>
> GSI stands for Global State Id. It's a client-side counterpart of NN's 
> {{GlobalStateIdContext}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-207) ozone listVolume command accepts random values as argument

2018-06-29 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-207:

Description: 
When no argument is provided to listVolume, it complains.

But when a random argument is provided to the listVolume command, it accepts it 
and displays all the volumes.
{noformat}
[root@ozone-vm bin]# ./ozone oz -listVolume
Missing argument for option: listVolume
ERROR: null
[root@ozone-vm bin]# ./ozone oz -listVolume abcdefghijk
2018-06-29 07:09:43,451 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
[ {
 "owner" : {
 "name" : "root"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 },
 "volumeName" : "nnvolume1",
 "createdOn" : "Sun, 18 Sep +50444 15:12:11 GMT",
 "createdBy" : "root"
}, {
 "owner" : {
 "name" : "root"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 },
 "volumeName" : "nnvolume2",
 "createdOn" : "Tue, 27 Sep +50444 13:05:43 GMT",
 "createdBy" : "root"
} ]

{noformat}
Expectation:

It should not accept random values as an argument.
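A minimal sketch of the expected strict behavior, assuming a hand-rolled check rather than the actual Ozone CLI parser (class and method names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: after option parsing, any leftover positional token
// (such as "abcdefghijk" above) should be treated as an error, not ignored.
public class StrictArgs {

    // Returns the tokens that no known option consumed; a strict CLI
    // would fail fast when this list is non-empty.
    public static List<String> unexpected(List<String> knownOptions, String[] argv) {
        List<String> leftover = new ArrayList<>();
        for (String a : argv) {
            if (!a.startsWith("-") && !knownOptions.contains(a)) {
                leftover.add(a);
            }
        }
        return leftover;
    }

    public static void main(String[] args) {
        // "-listVolume abcdefghijk": the flag is known, the trailing value is not.
        List<String> left = unexpected(Arrays.asList("-listVolume"),
                new String[]{"-listVolume", "abcdefghijk"});
        if (!left.isEmpty()) {
            System.err.println("Unexpected argument(s): " + left);
        }
    }
}
```

With a check like this, the command would exit with an error instead of silently listing all volumes.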

  was:
When argument from listVolume is provided, it complains.

But a random argument is provided for listVolume command, it accepts and 
displays all the volumes.
{noformat}



[root@ozone-vm bin]# ./ozone oz -listVolume
Missing argument for option: listVolume
ERROR: null
[root@ozone-vm bin]# ./ozone oz -listVolume abcdefghijk
2018-06-29 07:09:43,451 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
[ {
 "owner" : {
 "name" : "root"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 },
 "volumeName" : "nnvolume1",
 "createdOn" : "Sun, 18 Sep +50444 15:12:11 GMT",
 "createdBy" : "root"
}, {
 "owner" : {
 "name" : "root"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 },
 "volumeName" : "nnvolume2",
 "createdOn" : "Tue, 27 Sep +50444 13:05:43 GMT",
 "createdBy" : "root"
} ]

{noformat}
expectation:

It should not accept random values as argument


> ozone listVolume command accepts random values as argument
> --
>
> Key: HDDS-207
> URL: https://issues.apache.org/jira/browse/HDDS-207
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
>
> When no argument is provided to listVolume, it complains.
> But when a random argument is provided to the listVolume command, it accepts it 
> and displays all the volumes.
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume
> Missing argument for option: listVolume
> ERROR: null
> [root@ozone-vm bin]# ./ozone oz -listVolume abcdefghijk
> 2018-06-29 07:09:43,451 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume1",
>  "createdOn" : "Sun, 18 Sep +50444 15:12:11 GMT",
>  "createdBy" : "root"
> }, {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume2",
>  "createdOn" : "Tue, 27 Sep +50444 13:05:43 GMT",
>  "createdBy" : "root"
> } ]
> {noformat}
> Expectation:
> It should not accept random values as an argument.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-207) ozone listVolume command accepts random values as argument

2018-06-29 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain reassigned HDDS-207:


Assignee: Lokesh Jain

> ozone listVolume command accepts random values as argument
> --
>
> Key: HDDS-207
> URL: https://issues.apache.org/jira/browse/HDDS-207
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
>
> When no argument is provided to listVolume, it complains.
> But when a random argument is provided to the listVolume command, it accepts it 
> and displays all the volumes.
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume
> Missing argument for option: listVolume
> ERROR: null
> [root@ozone-vm bin]# ./ozone oz -listVolume abcdefghijk
> 2018-06-29 07:09:43,451 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume1",
>  "createdOn" : "Sun, 18 Sep +50444 15:12:11 GMT",
>  "createdBy" : "root"
> }, {
>  "owner" : {
>  "name" : "root"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "nnvolume2",
>  "createdOn" : "Tue, 27 Sep +50444 13:05:43 GMT",
>  "createdBy" : "root"
> } ]
> {noformat}
> Expectation:
> It should not accept random values as an argument.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-207) ozone listVolume command accepts random values as argument

2018-06-29 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-207:
---

 Summary: ozone listVolume command accepts random values as argument
 Key: HDDS-207
 URL: https://issues.apache.org/jira/browse/HDDS-207
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Nilotpal Nandi
 Fix For: 0.2.1


When no argument is provided to listVolume, it complains.

But when a random argument is provided to the listVolume command, it accepts it 
and displays all the volumes.
{noformat}



[root@ozone-vm bin]# ./ozone oz -listVolume
Missing argument for option: listVolume
ERROR: null
[root@ozone-vm bin]# ./ozone oz -listVolume abcdefghijk
2018-06-29 07:09:43,451 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
[ {
 "owner" : {
 "name" : "root"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 },
 "volumeName" : "nnvolume1",
 "createdOn" : "Sun, 18 Sep +50444 15:12:11 GMT",
 "createdBy" : "root"
}, {
 "owner" : {
 "name" : "root"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 },
 "volumeName" : "nnvolume2",
 "createdOn" : "Tue, 27 Sep +50444 13:05:43 GMT",
 "createdBy" : "root"
} ]

{noformat}
Expectation:

It should not accept random values as an argument.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13710) RBF: setQuota and getQuotaUsage should check the dfs.federation.router.quota.enable

2018-06-29 Thread yanghuafeng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527470#comment-16527470
 ] 

yanghuafeng commented on HDFS-13710:


[~elgoiri] [~linyiqun] Could you find time to review the code, please?

> RBF:  setQuota and getQuotaUsage should check the 
> dfs.federation.router.quota.enable
> 
>
> Key: HDFS-13710
> URL: https://issues.apache.org/jira/browse/HDFS-13710
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 2.9.1, 3.0.3
>Reporter: yanghuafeng
>Priority: Major
> Attachments: HDFS-13710.patch
>
>
> When I use the command below, some exceptions happen.
>  
> {code:java}
> hdfs dfsrouteradmin -setQuota /tmp -ssQuota 1G 
> {code}
> The logs follow:
> {code:java}
> Successfully set quota for mount point /tmp
> {code}
> It looks like the quota is set successfully, but some exceptions appear in 
> the RBF server log.
> {code:java}
> java.io.IOException: No remote locations available
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1002)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:967)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:940)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:84)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:255)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:238)
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolServerSideTranslatorPB.updateMountTableEntry(RouterAdminProtocolServerSideTranslatorPB.java:179)
> at 
> org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos$RouterAdminProtocolService$2.callBlockingMethod(RouterProtocolProtos.java:259)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> {code}
> I find that dfs.federation.router.quota.enable is false by default, and that 
> causes the problem. I think we should check this parameter when we call 
> setQuota and getQuotaUsage.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13710) RBF: setQuota and getQuotaUsage should check the dfs.federation.router.quota.enable

2018-06-29 Thread yanghuafeng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yanghuafeng updated HDFS-13710:
---
Attachment: HDFS-13710.patch
Status: Patch Available  (was: Open)

Uploaded the patch to resolve the issue.

> RBF:  setQuota and getQuotaUsage should check the 
> dfs.federation.router.quota.enable
> 
>
> Key: HDFS-13710
> URL: https://issues.apache.org/jira/browse/HDFS-13710
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 3.0.3, 2.9.1
>Reporter: yanghuafeng
>Priority: Major
> Attachments: HDFS-13710.patch
>
>
> When I use the command below, some exceptions happen.
>  
> {code:java}
> hdfs dfsrouteradmin -setQuota /tmp -ssQuota 1G 
> {code}
> The logs follow:
> {code:java}
> Successfully set quota for mount point /tmp
> {code}
> It looks like the quota is set successfully, but some exceptions appear in 
> the RBF server log.
> {code:java}
> java.io.IOException: No remote locations available
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1002)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:967)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:940)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:84)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:255)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:238)
> at 
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolServerSideTranslatorPB.updateMountTableEntry(RouterAdminProtocolServerSideTranslatorPB.java:179)
> at 
> org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos$RouterAdminProtocolService$2.callBlockingMethod(RouterProtocolProtos.java:259)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> {code}
> I find that dfs.federation.router.quota.enable is false by default, and that 
> causes the problem. I think we should check this parameter when we call 
> setQuota and getQuotaUsage.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13709) Report bad block to NN when transfer block encounter EIO exception

2018-06-29 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-13709:
--
Description: 
In our online cluster, the BlockPoolSliceScanner is turned off, and sometimes a 
bad disk track may cause data loss.

For example, suppose there are 3 replicas on 3 machines A/B/C. If a bad track 
occurs on A's replica data, and someday B and C crash at the same time, the NN 
will try to replicate data from A but fail. The block is now corrupt but no one 
knows, because the NN thinks there is at least 1 healthy replica and keeps 
trying to replicate it.

When reading a replica that has data on a bad track, the OS will return an EIO 
error. If the DN reports the bad block as soon as it gets an EIO, we can detect 
this case ASAP and try to avoid data loss.
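The proposal could look roughly like the following sketch; {{BlockReader}}, {{BadBlockReporter}}, and the method names are illustrative stand-ins, not the actual DataNode code:

```java
import java.io.IOException;

// Hypothetical sketch of reporting a bad block as soon as an EIO-style
// read failure occurs during block transfer; all names are illustrative.
public class TransferSketch {

    public interface BlockReader { void read() throws IOException; }

    // Stand-in for the DN-to-NN reporting path
    // (e.g. DatanodeProtocol#reportBadBlocks).
    public interface BadBlockReporter { void reportBadBlock(String blockId); }

    // Returns true on success; on an I/O error (EIO surfaces as an
    // IOException in Java) it reports the block immediately and returns false.
    public static boolean transferBlock(String blockId, BlockReader reader,
                                        BadBlockReporter reporter) {
        try {
            reader.read();              // may fail with EIO on a bad disk track
            return true;
        } catch (IOException e) {
            reporter.reportBadBlock(blockId);  // tell the NN right away
            return false;
        }
    }
}
```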

  was:
In our online cluster, the BlockPoolSliceScanner is turned off, and sometimes 
disk bad track may cause data loss.

For example, there are 3 replicas on 3 machines A/B/C, if a bad track occurs on 
A's replica data, and someday B and C crushed at the same time, NN will try to 
replicate data from A but failed, this block is corrupt now but no one knows, 
because NN think there is at least 1 healthy replica and it keep trying to 
replicate it.

When reading a replica which hav data on bad track, OS will return an EIO 
error, if DN reports the bad block as soon as it got an EIO,  we can find this 
case ASAP and try to avoid data loss


> Report bad block to NN when transfer block encounter EIO exception
> --
>
> Key: HDFS-13709
> URL: https://issues.apache.org/jira/browse/HDFS-13709
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-13709.patch
>
>
> In our online cluster, the BlockPoolSliceScanner is turned off, and sometimes 
> a bad disk track may cause data loss.
> For example, suppose there are 3 replicas on 3 machines A/B/C. If a bad track 
> occurs on A's replica data, and someday B and C crash at the same time, the NN 
> will try to replicate data from A but fail. The block is now corrupt but no 
> one knows, because the NN thinks there is at least 1 healthy replica and 
> keeps trying to replicate it.
> When reading a replica that has data on a bad track, the OS will return an 
> EIO error. If the DN reports the bad block as soon as it gets an EIO, we can 
> detect this case ASAP and try to avoid data loss.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13709) Report bad block to NN when transfer block encounter EIO exception

2018-06-29 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-13709:
--
Attachment: HDFS-13709.patch

> Report bad block to NN when transfer block encounter EIO exception
> --
>
> Key: HDFS-13709
> URL: https://issues.apache.org/jira/browse/HDFS-13709
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-13709.patch
>
>
> In our online cluster, the BlockPoolSliceScanner is turned off, and sometimes 
> a bad disk track may cause data loss.
> For example, suppose there are 3 replicas on 3 machines A/B/C. If a bad track 
> occurs on A's replica data, and someday B and C crash at the same time, the NN 
> will try to replicate data from A but fail. The block is now corrupt but no 
> one knows, because the NN thinks there is at least 1 healthy replica and 
> keeps trying to replicate it.
> When reading a replica that has data on a bad track, the OS will return an 
> EIO error. If the DN reports the bad block as soon as it gets an EIO, we can 
> detect this case ASAP and try to avoid data loss.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13709) Report bad block to NN when transfer block encounter EIO exception

2018-06-29 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-13709:
--
Description: 
In our online cluster, the BlockPoolSliceScanner is turned off, and sometimes a 
bad disk track may cause data loss.

For example, suppose there are 3 replicas on 3 machines A/B/C. If a bad track 
occurs on A's replica data, and someday B and C crash at the same time, the NN 
will try to replicate data from A but fail. The block is now corrupt but no one 
knows, because the NN thinks there is at least 1 healthy replica and keeps 
trying to replicate it.

When reading a replica that has data on a bad track, the OS will return an EIO 
error. If the DN reports the bad block as soon as it gets an EIO, we can detect 
this case ASAP and try to avoid data loss.

> Report bad block to NN when transfer block encounter EIO exception
> --
>
> Key: HDFS-13709
> URL: https://issues.apache.org/jira/browse/HDFS-13709
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
>
> In our online cluster, the BlockPoolSliceScanner is turned off, and sometimes 
> a bad disk track may cause data loss.
> For example, suppose there are 3 replicas on 3 machines A/B/C. If a bad track 
> occurs on A's replica data, and someday B and C crash at the same time, the NN 
> will try to replicate data from A but fail. The block is now corrupt but no 
> one knows, because the NN thinks there is at least 1 healthy replica and 
> keeps trying to replicate it.
> When reading a replica that has data on a bad track, the OS will return an 
> EIO error. If the DN reports the bad block as soon as it gets an EIO, we can 
> detect this case ASAP and try to avoid data loss.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13710) RBF: setQuota and getQuotaUsage should check the dfs.federation.router.quota.enable

2018-06-29 Thread yanghuafeng (JIRA)
yanghuafeng created HDFS-13710:
--

 Summary: RBF:  setQuota and getQuotaUsage should check the 
dfs.federation.router.quota.enable
 Key: HDFS-13710
 URL: https://issues.apache.org/jira/browse/HDFS-13710
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation, hdfs
Affects Versions: 3.0.3, 2.9.1
Reporter: yanghuafeng


When I use the command below, some exceptions happen.

 
{code:java}
hdfs dfsrouteradmin -setQuota /tmp -ssQuota 1G 
{code}
The logs follow:
{code:java}
Successfully set quota for mount point /tmp
{code}
It looks like the quota is set successfully, but some exceptions appear in the 
RBF server log.
{code:java}
java.io.IOException: No remote locations available
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1002)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:967)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:940)
at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:84)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:255)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:238)
at 
org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolServerSideTranslatorPB.updateMountTableEntry(RouterAdminProtocolServerSideTranslatorPB.java:179)
at 
org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos$RouterAdminProtocolService$2.callBlockingMethod(RouterProtocolProtos.java:259)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
{code}
I find that dfs.federation.router.quota.enable is false by default, and that 
causes the problem. I think we should check this parameter when we call 
setQuota and getQuotaUsage.
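A minimal sketch of the proposed guard, assuming a hypothetical helper rather than the actual RouterAdminServer code:

```java
// Hypothetical sketch: fail fast when router quota support is disabled,
// instead of reporting success and then failing inside the Router.
public class QuotaGuard {

    private final boolean quotaEnabled;

    // The flag would be read from dfs.federation.router.quota.enable.
    public QuotaGuard(boolean quotaEnabled) {
        this.quotaEnabled = quotaEnabled;
    }

    // Called at the start of setQuota/getQuotaUsage handling.
    public void checkQuotaEnabled() {
        if (!quotaEnabled) {
            throw new UnsupportedOperationException(
                "Quota is disabled; set dfs.federation.router.quota.enable to true");
        }
    }
}
```

With such a check, `hdfs dfsrouteradmin -setQuota` would fail up front rather than printing "Successfully set quota" and then failing in the Router log.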

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-29 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527352#comment-16527352
 ] 

genericqa commented on HDFS-12976:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
11s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
58s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
22s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
24s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
17s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
43s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}246m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.currentProxyIndex;
 locked 75% of time  Unsynchronized access at 
ObserverReadProxyProvider.java:75% of time  Unsynchronized access at 
ObserverReadProxyProvider.java:[line 175] |
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd 

[jira] [Commented] (HDDS-187) Command status publisher for datanode

2018-06-29 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527349#comment-16527349
 ] 

Nanda kumar commented on HDDS-187:
--

Thanks [~ajayydv] for working on this. Please find my review comments below.

*Possible NullPointerException*

{{StateContext}} has {{cmdStatusMap}}, which holds the command-to-status 
mapping.

1. Whenever we receive a command from SCM, {{HeartbeatEndpointTask}} adds an 
entry to {{cmdStatusMap}} of {{StateContext}}.
2. {{CommandHandler}} updates the {{cmdStatusMap}} once the command is executed.
3. {{CommandStatusReportPublisher}} moves the entry from 
{{StateContext#cmdStatusMap}} to {{StateContext#reports}} at the configured 
interval.
4. {{HeartbeatEndpointTask}} picks the {{CommandStatusReportsProto}} from 
{{StateContext#reports}} and sends it as part of next heartbeat.

In the above sequence, steps 2 and 3 are independent. If the command processing 
takes time, step 3 may execute before step 2; in that case, we will get a 
NullPointerException in step 2.
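One way to make step 2 tolerate this race is a null-safe update, sketched below with hypothetical names (this is not the HDDS code, just an illustration of the fix):

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: step 2 becomes a no-op instead of an NPE when the
// report publisher (step 3) has already drained the entry.
public class CmdStatusMap {

    public enum Status { PENDING, EXECUTED }

    private final ConcurrentHashMap<Long, Status> map = new ConcurrentHashMap<>();

    public void add(long cmdId) {              // step 1: command received
        map.put(cmdId, Status.PENDING);
    }

    public Status drain(long cmdId) {          // step 3: publisher moves the entry out
        return map.remove(cmdId);
    }

    public boolean markExecuted(long cmdId) {  // step 2: handler finished
        // replace() returns null (a no-op) when the key is absent, unlike a
        // blind map.get(cmdId) dereference, which would throw an NPE.
        return map.replace(cmdId, Status.EXECUTED) != null;
    }
}
```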

*CommandHandlers*
All the {{CommandHandlers}} already have access to {{StateContext}}, which is 
passed as part of the {{handle}} call. We don't need it in the constructor.

*HeartbeatEndpointTask.java*
No need to pass {{context}} to {{addCommandStatus}}; context is already an 
instance variable.

*CloseContainerCommand.java*
We can make the new constructor private, as it should be invoked only through 
{{getFromProtobuf}}.

*DeleteBlocksCommand.java*
The new constructor can be made private.

*CloseContainerCommandHandler.java*
Line:21 checkstyle - line length

*DeleteBlocksCommandHandler.java*
Line:21 unused import
Line:22 checkstyle - line length

*StateContext.java*
Line:23 checkstyle - line length
Incorrect javadoc for {{getCommandStatusMap}}

*RegisterEndpointTask.java*
Spurious change.

*ReportPublisherFactory.java*
Line:22 checkstyle - line length

*SCMDatanodeHeartbeatDispatcher.java*
Line:21 & 52 checkstyle - line length

*ScmTestMock.java*
Line:21 checkstyle - line length
{code:java}
if(heartbeat.hasCommandStatusReport()){
  heartbeat.getCommandStatusReport().getCmdStatusList().forEach( cmd -> {
  cmdStatusList.add(cmd);
  });
  commandStatusReport.incrementAndGet();
}
{code}
can be refactored to
{code:java}
if(heartbeat.hasCommandStatusReport()){
  cmdStatusList.addAll(heartbeat.getCommandStatusReport().getCmdStatusList());
  commandStatusReport.incrementAndGet();
}
{code}
*TestEndPoint.java*
Line: 26 - 33 checkstyle - line length

*TestReportPublisher.java*
Line: 28 - 30 checkstyle - line length

*TestSCMDatanodeHeartbeatDispatcher.java*
Line: 26 & 34 checkstyle - line length

> Command status publisher for datanode
> -
>
> Key: HDDS-187
> URL: https://issues.apache.org/jira/browse/HDDS-187
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-187.00.patch, HDDS-187.01.patch
>
>
> Currently SCM sends set of commands for DataNode. DataNode executes them via 
> CommandHandler. This jira intends to create a Command status publisher which 
> will return status of these commands back to the SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-205) Add metrics to HddsDispatcher

2018-06-29 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527343#comment-16527343
 ] 

Shashikant Banerjee commented on HDDS-205:
--

Thanks [~bharatviswa], for reporting and working on this issue. Some comments 
inline:

1) The test failures are mostly because "metrics" was never initialized: the 
dispatcher.init() call is missing in the tests, causing a NullPointerException. 
The tests need to be fixed.

2) It would be better not to use * imports (not related to the patch).

KeyValueHandler.java: 73

3) Please fix the findbugs issues.

 

> Add metrics to HddsDispatcher
> -
>
> Key: HDDS-205
> URL: https://issues.apache.org/jira/browse/HDDS-205
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-205-HDDS-48.00.patch, HDDS-205-HDDS-48.01.patch
>
>
> This patch adds metrics to newly added HddsDispatcher.
> This uses, already existing ContainerMetrics.






[jira] [Assigned] (HDDS-206) default port number taken by ksm is 9862 while listing the volumes

2018-06-29 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDDS-206:


Assignee: Shashikant Banerjee

> default port number taken by ksm is 9862 while listing the volumes
> --
>
> Key: HDDS-206
> URL: https://issues.apache.org/jira/browse/HDDS-206
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
>
> Here is the output of ozone -listVolume command without any port mentioned .
> By default, it chooses the port number as 9862 which is not mentioned in the 
> ozone-site.xml
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume o3://127.0.0.1/
> 2018-06-29 04:42:20,652 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-06-29 04:42:21,914 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:22,915 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:23,917 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:24,925 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:25,928 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:26,931 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 5 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:27,932 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 6 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:28,934 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 7 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:29,935 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 8 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:30,938 INFO ipc.Client: Retrying connect to server: 
> localhost/127.0.0.1:9862. Already tried 9 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2018-06-29 04:42:31,075 [main] ERROR - Couldn't create protocol class 
> org.apache.hadoop.ozone.client.rpc.RpcClient exception:
> java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:292)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:172)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:156)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:111)
>  at org.apache.hadoop.ozone.web.ozShell.Handler.verifyURI(Handler.java:96)
>  at 
> org.apache.hadoop.ozone.web.ozShell.volume.ListVolumeHandler.execute(ListVolumeHandler.java:80)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.dispatch(Shell.java:395)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.run(Shell.java:135)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:114)
> Caused by: java.net.ConnectException: Call From ozone-vm/10.200.5.166 to 
> localhost:9862 failed on 

[jira] [Created] (HDFS-13709) Report bad block to NN when transfer block encounter EIO exception

2018-06-29 Thread Chen Zhang (JIRA)
Chen Zhang created HDFS-13709:
-

 Summary: Report bad block to NN when transfer block encounter EIO 
exception
 Key: HDFS-13709
 URL: https://issues.apache.org/jira/browse/HDFS-13709
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Chen Zhang
Assignee: Chen Zhang









[jira] [Created] (HDDS-206) default port number taken by ksm is 9862 while listing the volumes

2018-06-29 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-206:
---

 Summary: default port number taken by ksm is 9862 while listing 
the volumes
 Key: HDDS-206
 URL: https://issues.apache.org/jira/browse/HDDS-206
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Nilotpal Nandi
 Fix For: 0.2.1


Here is the output of the ozone -listVolume command without any port mentioned.

By default, it chooses port 9862, which is not mentioned in the 
ozone-site.xml.
{noformat}
[root@ozone-vm bin]# ./ozone oz -listVolume o3://127.0.0.1/
2018-06-29 04:42:20,652 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-06-29 04:42:21,914 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9862. Already tried 0 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-06-29 04:42:22,915 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9862. Already tried 1 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-06-29 04:42:23,917 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9862. Already tried 2 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-06-29 04:42:24,925 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9862. Already tried 3 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-06-29 04:42:25,928 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9862. Already tried 4 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-06-29 04:42:26,931 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9862. Already tried 5 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-06-29 04:42:27,932 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9862. Already tried 6 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-06-29 04:42:28,934 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9862. Already tried 7 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-06-29 04:42:29,935 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9862. Already tried 8 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-06-29 04:42:30,938 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9862. Already tried 9 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-06-29 04:42:31,075 [main] ERROR - Couldn't create protocol class 
org.apache.hadoop.ozone.client.rpc.RpcClient exception:
java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:292)
 at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:172)
 at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:156)
 at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:111)
 at org.apache.hadoop.ozone.web.ozShell.Handler.verifyURI(Handler.java:96)
 at 
org.apache.hadoop.ozone.web.ozShell.volume.ListVolumeHandler.execute(ListVolumeHandler.java:80)
 at org.apache.hadoop.ozone.web.ozShell.Shell.dispatch(Shell.java:395)
 at org.apache.hadoop.ozone.web.ozShell.Shell.run(Shell.java:135)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
 at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:114)
Caused by: java.net.ConnectException: Call From ozone-vm/10.200.5.166 to 
localhost:9862 failed on connection exception: java.net.ConnectException: 
Connection refused; For more details see: 
http://wiki.apache.org/hadoop/ConnectionRefused
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at 

[jira] [Commented] (HDDS-59) Ozone client should update blocksize in OM for sub-block writes

2018-06-29 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-59?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527312#comment-16527312
 ] 

Shashikant Banerjee commented on HDDS-59:
-

Thanks [~msingh], for reporting and working on this. The patch doesn't apply to 
trunk anymore. Can you please rebase?

 

> Ozone client should update blocksize in OM for sub-block writes
> ---
>
> Key: HDDS-59
> URL: https://issues.apache.org/jira/browse/HDDS-59
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-59.001.patch
>
>
> Currently ozone client allocates block of the required length from SCM 
> through KSM.
> However it might happen due to error cases or because of small writes that 
> the allocated block is not completely written.
> In these cases, client should update the KSM with the length of the block. 
> This will help in error cases as well as cases where client does not write 
> the complete block to Ozone.






[jira] [Updated] (HDDS-197) DataNode should return ContainerClosingException/ContainerClosedException (CCE) to client if the container is in Closing/Closed State

2018-06-29 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-197:
-
Summary: DataNode should return 
ContainerClosingException/ContainerClosedException (CCE) to client if the 
container is in Closing/Closed State  (was: DataNode should return Datanode to 
return ContainerClosingException/ContainerClosedException (CCE) to client if 
the container is in Closing/Closed State)

> DataNode should return ContainerClosingException/ContainerClosedException 
> (CCE) to client if the container is in Closing/Closed State
> -
>
> Key: HDDS-197
> URL: https://issues.apache.org/jira/browse/HDDS-197
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
>
> SCM queues the CloseContainer command to the DataNode over the heartbeat 
> response, which is handled by the Ratis server inside the Datanode. In case 
> the container transitions to CLOSING/CLOSED state while the ozone client is 
> writing data, it should throw 
> ContainerClosingException/ContainerClosedException accordingly. These 
> exceptions will be handled by the client, which will retry to get the last 
> committed BlockInfo from the Datanode and update the OzoneMaster.






[jira] [Commented] (HDDS-205) Add metrics to HddsDispatcher

2018-06-29 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527311#comment-16527311
 ] 

genericqa commented on HDDS-205:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 9s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
20s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-hdds/container-service in HDDS-48 has 6 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
59s{color} | {color:red} hadoop-hdds/container-service generated 1 new + 6 
unchanged - 0 fixed = 7 total (was 6) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 21s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m  6s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Possible null pointer dereference of data in 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleWriteChunk(ContainerProtos$ContainerCommandRequestProto,
 KeyValueContainer)  Dereferenced at KeyValueHandler.java:data in 

[jira] [Commented] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-06-29 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527287#comment-16527287
 ] 

genericqa commented on HDDS-175:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 27m 
44s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 27m 44s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 27m 44s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
35s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} 

[jira] [Updated] (HDFS-13610) [Edit Tail Fast Path Pt 4] Cleanup: integration test, documentation, remove unnecessary dummy sync

2018-06-29 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13610:
---
Description: 
See HDFS-13150 for full design.

This JIRA is targeted at cleanup tasks:
* Add in integration testing. We can expand {{TestStandbyInProgressTail}}
* Documentation in HDFSHighAvailabilityWithQJM
* Remove the dummy sync added as part of HDFS-10519; it is unnecessary since 
now in-progress tailing does not rely on the JN committedTxnId

A few bugs are also fixed:
* Due to changes in HDFS-13609 to enable use of the RPC mechanism whenever 
inProgressOK is true, there were codepaths which would use the RPC mechanism 
even when dfs.ha.tail-edits.in-progress was false, meaning that the JNs did not 
enable the cache. Update the QJM logic to only use selectRpcInputStreams if 
this config is true.
* Fix a false error logged when the layout version changes
* Fix the logging when a layout version change occurs to avoid printing out a 
placeholder value (Integer.MAX_VALUE)


  was:
See HDFS-13150 for full design.

This JIRA is targeted at cleanup tasks:
* Add in integration testing. We can expand {{TestStandbyInProgressTail}}
* Documentation in HDFSHighAvailabilityWithQJM
* Remove the dummy sync added as part of HDFS-10519; it is unnecessary since 
now in-progress tailing does not rely on the JN committedTxnId

Two bugs are also fixed:
* Unnece


> [Edit Tail Fast Path Pt 4] Cleanup: integration test, documentation, remove 
> unnecessary dummy sync
> --
>
> Key: HDFS-13610
> URL: https://issues.apache.org/jira/browse/HDFS-13610
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, journal-node, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13610-HDFS-12943.000.patch, 
> HDFS-13610-HDFS-12943.001.patch, HDFS-13610-HDFS-12943.002.patch, 
> HDFS-13610-HDFS-12943.003.patch
>
>
> See HDFS-13150 for full design.
> This JIRA is targeted at cleanup tasks:
> * Add in integration testing. We can expand {{TestStandbyInProgressTail}}
> * Documentation in HDFSHighAvailabilityWithQJM
> * Remove the dummy sync added as part of HDFS-10519; it is unnecessary since 
> now in-progress tailing does not rely on the JN committedTxnId
> A few bugs are also fixed:
> * Due to changes in HDFS-13609 to enable use of the RPC mechanism whenever 
> inProgressOK is true, there were codepaths which would use the RPC mechanism 
> even when dfs.ha.tail-edits.in-progress was false, meaning that the JNs did 
> not enable the cache. Update the QJM logic to only use selectRpcInputStreams 
> if this config is true.
> * Fix a false error logged when the layout version changes
> * Fix the logging when a layout version change occurs to avoid printing out a 
> placeholder value (Integer.MAX_VALUE)






[jira] [Updated] (HDFS-13610) [Edit Tail Fast Path Pt 4] Cleanup: integration test, documentation, remove unnecessary dummy sync

2018-06-29 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13610:
---
Description: 
See HDFS-13150 for full design.

This JIRA is targeted at cleanup tasks:
* Add in integration testing. We can expand {{TestStandbyInProgressTail}}
* Documentation in HDFSHighAvailabilityWithQJM
* Remove the dummy sync added as part of HDFS-10519; it is unnecessary since 
now in-progress tailing does not rely on the JN committedTxnId

Two bugs are also fixed:
* Unnece

  was:
See HDFS-13150 for full design.

This JIRA is targeted at cleanup tasks:
* Add in integration testing. We can expand {{TestStandbyInProgressTail}}
* Documentation in HDFSHighAvailabilityWithQJM
* Remove the dummy sync added as part of HDFS-10519; it is unnecessary since 
now in-progress tailing does not rely as heavily on the JN committedTxnId


> [Edit Tail Fast Path Pt 4] Cleanup: integration test, documentation, remove 
> unnecessary dummy sync
> --
>
> Key: HDFS-13610
> URL: https://issues.apache.org/jira/browse/HDFS-13610
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, journal-node, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13610-HDFS-12943.000.patch, 
> HDFS-13610-HDFS-12943.001.patch, HDFS-13610-HDFS-12943.002.patch, 
> HDFS-13610-HDFS-12943.003.patch
>
>
> See HDFS-13150 for full design.
> This JIRA is targeted at cleanup tasks:
> * Add in integration testing. We can expand {{TestStandbyInProgressTail}}
> * Documentation in HDFSHighAvailabilityWithQJM
> * Remove the dummy sync added as part of HDFS-10519; it is unnecessary since 
> now in-progress tailing does not rely on the JN committedTxnId
> Two bugs are also fixed:
> * Unnece
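The "dummy sync is unnecessary" point above can be illustrated with a toy model (not actual Hadoop/QJM code; `ToyJournal`, `ToyStandby`, and `get_edits_since` are invented names): if the journal serves every durably written edit, including the in-progress segment, the reader no longer gates on a separately maintained committedTxnId, so nothing needs a dummy sync to advance that watermark.

```python
# Toy model of in-progress edit tailing without a committedTxnId gate.
# All names here are hypothetical; this is not the JournalNode API.

class ToyJournal:
    def __init__(self):
        self.edits = []          # durably written edits, in txid order

    def write(self, op):
        self.edits.append(op)

    def get_edits_since(self, txid):
        # Serve everything durably written, including the in-progress
        # segment; no separate "committed" watermark is consulted.
        return self.edits[txid:]

class ToyStandby:
    def __init__(self, journal):
        self.journal = journal
        self.applied = 0         # number of edits applied so far

    def tail(self):
        new = self.journal.get_edits_since(self.applied)
        self.applied += len(new)
        return new

journal = ToyJournal()
standby = ToyStandby(journal)
journal.write("mkdir /a")
journal.write("create /a/f")
print(standby.tail())  # ['mkdir /a', 'create /a/f']
```

Because the standby tracks only how far it has applied, repeated tailing picks up exactly the new edits and nothing else.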



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13665) Move RPC response serialization into Server.doResponse

2018-06-29 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527256#comment-16527256
 ] 

Erik Krogen commented on HDFS-13665:


(1) Sounds good, I gathered the same insight but wanted a second pair of eyes. 
Seems we are fine here.
(2) Thanks!
(3) Sounds fine to me

> Move RPC response serialization into Server.doResponse
> --
>
> Key: HDFS-13665
> URL: https://issues.apache.org/jira/browse/HDFS-13665
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13665-HDFS-12943.000.patch, 
> HDFS-13665-HDFS-12943.001.patch
>
>
> In HDFS-13399 we addressed a race condition in AlignmentContext processing 
> where the RPC response would assign a transactionId independently of the 
> transaction's own processing, resulting in a stateId response that was lower 
> than expected. However, this caused us to serialize the RpcResponse twice in 
> order to address the header field change.
> See here:
> https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16464279=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16464279
> And here:
> https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16498660=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16498660
> In the end it was agreed to move the logic of Server.setupResponse directly 
> into Server.doResponse.
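The fix described above can be sketched abstractly (hypothetical names; this is not the actual `ipc.Server` API): rather than serializing the response when call processing finishes and then re-serializing after the stateId header is fixed up, keep the payload unserialized and build the wire bytes exactly once at send time, when the final stateId is known.

```python
# Sketch of "serialize once, at send time" so the header can never carry
# a stale stateId. Call/do_response are illustrative, not Hadoop code.
import json

class Call:
    def __init__(self, payload):
        self.payload = payload   # response body, kept unserialized
        self.state_id = None     # filled in just before sending

def do_response(call, current_state_id):
    # Single serialization point: header and body are rendered together,
    # eliminating the earlier double-serialization step.
    call.state_id = current_state_id
    return json.dumps({"stateId": call.state_id, "body": call.payload})

call = Call({"result": "ok"})
wire = do_response(call, current_state_id=42)
print(wire)  # {"stateId": 42, "body": {"result": "ok"}}
```

The design choice is that the stateId is read at the last possible moment, inside the one serialization path, so no fix-up pass over already-rendered bytes is needed.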






[jira] [Commented] (HDDS-187) Command status publisher for datanode

2018-06-29 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527236#comment-16527236
 ] 

genericqa commented on HDDS-187:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
10s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-187 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929673/HDDS-187.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  cc  |
| uname | Linux 860be9ab5637 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64