[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-08-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900641#comment-16900641
 ] 

Hadoop QA commented on HDFS-13596:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 796 unchanged - 1 fixed = 796 total (was 797) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-13596 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976773/HDFS-13596.009.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 607a33003237 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d6697da |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27411/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27411/testReport/ |
| Max. process+thread count | 2837 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/27411/console |

[jira] [Work logged] (HDDS-1900) Remove UpdateBucket handler which supports add/remove Acl

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1900?focusedWorklogId=289427=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289427
 ]

ASF GitHub Bot logged work on HDDS-1900:


Author: ASF GitHub Bot
Created on: 06/Aug/19 04:44
Start Date: 06/Aug/19 04:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1219: HDDS-1900. 
Remove UpdateBucket handler which supports add/remove Acl.
URL: https://github.com/apache/hadoop/pull/1219#issuecomment-518496476
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----:|:--------|:-------|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 81 | Maven dependency ordering for branch |
   | +1 | mvninstall | 636 | trunk passed |
   | +1 | compile | 373 | trunk passed |
   | +1 | checkstyle | 78 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 881 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | trunk passed |
   | 0 | spotbugs | 424 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 619 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for patch |
   | +1 | mvninstall | 564 | the patch passed |
   | +1 | compile | 381 | the patch passed |
   | +1 | javac | 381 | the patch passed |
   | +1 | checkstyle | 88 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 670 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | +1 | findbugs | 631 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 287 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2450 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 8396 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1219/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1219 |
   | Optional Tests | dupname asflicense mvnsite compile javac javadoc 
mvninstall unit shadedclient findbugs checkstyle |
   | uname | Linux c8623e83aacd 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d6697da |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1219/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1219/3/testReport/ |
   | Max. process+thread count | 4624 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1219/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289427)
Time Spent: 40m  (was: 0.5h)

> Remove UpdateBucket handler which supports add/remove Acl
> -
>
> Key: HDDS-1900
> URL: https://issues.apache.org/jira/browse/HDDS-1900

[jira] [Commented] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-08-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900589#comment-16900589
 ] 

Hadoop QA commented on HDFS-14652:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-14652 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14652 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976780/HDFS-14652.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27412/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> HealthMonitor connection retry times should be configurable
> ---
>
> Key: HDFS-14652
> URL: https://issues.apache.org/jira/browse/HDFS-14652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14652-001.patch, HDFS-14652-002.patch, 
> HDFS-14652.003.patch
>
>
> On our production HDFS cluster, a burst of client requests filled the TCP 
> kernel queue on the NameNode's host. Since "net.ipv4.tcp_syn_retries" is set 
> to 1 in our environment, after 3 seconds the ZooKeeper HealthMonitor got a 
> connection error like this:
> {code:java}
> WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
> monitor health of NameNode at nn_host_name/ip_address:port: Call From 
> zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
> java.net.ConnectException: Connection timed out; For more details see: 
> http://wiki.apache.org/hadoop/ConnectionRefused
> {code}
> This error caused a failover and affected the availability of that cluster; we 
> fixed it by raising the kernel parameter net.ipv4.tcp_syn_retries to 6.
> While working on this issue, we also found that the health monitor's connection 
> retry count (ipc.client.connect.max.retries) is hard-coded to 1. I think it 
> should be configurable: if we don't want the health monitor to be so 
> sensitive, we can change its behavior through this configuration.
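
A minimal sketch of the idea, assuming a hypothetical configuration key (the 
actual key name added by the patch may differ): the monitor reads its connect 
retry count from the Configuration instead of hard-coding 1, and the default 
preserves today's behavior.
{code:java}
import org.apache.hadoop.conf.Configuration;

public class HealthMonitorRetryExample {
  // Assumed key name, for illustration only.
  static final String RETRY_KEY = "ha.health-monitor.connect.max.retries";
  static final int RETRY_DEFAULT = 1; // matches the previously hard-coded value

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // A less sensitive health monitor would raise this,
    // e.g. conf.setInt(RETRY_KEY, 3);
    int retries = conf.getInt(RETRY_KEY, RETRY_DEFAULT);
    // Feed the value into the IPC client setting that used to be hard-coded.
    conf.setInt("ipc.client.connect.max.retries", retries);
    System.out.println("HealthMonitor connect retries: " + retries);
  }
}
{code}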



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-08-05 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900586#comment-16900586
 ] 

Chen Zhang commented on HDFS-14652:
---

Uploaded patch v3.

Hi [~jojochuang], I uploaded a full patch; if you only need the additional 
part, I'll re-submit a new one. Thanks.

> HealthMonitor connection retry times should be configurable
> ---
>
> Key: HDFS-14652
> URL: https://issues.apache.org/jira/browse/HDFS-14652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14652-001.patch, HDFS-14652-002.patch, 
> HDFS-14652.003.patch
>
>
> On our production HDFS cluster, a burst of client requests filled the TCP 
> kernel queue on the NameNode's host. Since "net.ipv4.tcp_syn_retries" is set 
> to 1 in our environment, after 3 seconds the ZooKeeper HealthMonitor got a 
> connection error like this:
> {code:java}
> WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
> monitor health of NameNode at nn_host_name/ip_address:port: Call From 
> zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
> java.net.ConnectException: Connection timed out; For more details see: 
> http://wiki.apache.org/hadoop/ConnectionRefused
> {code}
> This error caused a failover and affected the availability of that cluster; we 
> fixed it by raising the kernel parameter net.ipv4.tcp_syn_retries to 6.
> While working on this issue, we also found that the health monitor's connection 
> retry count (ipc.client.connect.max.retries) is hard-coded to 1. I think it 
> should be configurable: if we don't want the health monitor to be so 
> sensitive, we can change its behavior through this configuration.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14702) Datanode.ReplicaMap memory leak

2019-08-05 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900585#comment-16900585
 ] 

He Xiaoqiao commented on HDFS-14702:


[~crh], thanks for your feedback. I just noticed that HDFS-8859 tries to 
improve the memory footprint, but it has not been backported here. The 
`ReplicaMap` information is as follows:
||Type||Name||Value||
|ref|entrySet|null|
|int|hashSeed|0|
|int|modCount|1261173816|
|float|loadFactor|0.75|
|int|threshold|6291456|
|int|size|4093978|
|ref|table|java.util.HashMap$Entry[8388608] @ 0x78cb249f0|
|ref|values|java.util.HashMap$Values @ 0x778155350|
|ref|keySet|null|
!datanode.dump.png!
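
As a quick sanity check, the dumped numbers are internally consistent, and the 
entry count is far above the reported replica count (plain arithmetic over the 
values in the table above):
{code:java}
public class ReplicaMapDumpCheck {
  public static void main(String[] args) {
    long capacity = 8388608L;     // HashMap table length from the dump
    double loadFactor = 0.75;     // loadFactor from the dump
    System.out.println((long) (capacity * loadFactor)); // 6291456, matches threshold
    long entries = 4093978L;      // dumped size, ~4.1M entries
    long replicas = 600000L;      // ~600K replicas actually on the DataNode
    // ~6.8x more map entries than live replicas, consistent with a leak.
    System.out.printf("entries/replicas = %.1f%n", (double) entries / replicas);
  }
}
{code}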

> Datanode.ReplicaMap memory leak
> ---
>
> Key: HDFS-14702
> URL: https://issues.apache.org/jira/browse/HDFS-14702
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: He Xiaoqiao
>Priority: Major
> Attachments: datanode.dump.png
>
>
> DataNode memory is occupied by ReplicaMaps, causing frequent GC and degrading 
> write performance.
> There are about 600K block replicas on the DataNode, but a heap dump shows 
> over 8M items in the ReplicaMaps with a footprint over 500MB, which looks 
> like a memory leak. In addition, the block read/write ops rate is very high.
> We have not tested HDFS-8859 and do not know whether it would solve this issue.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-08-05 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14652:
--
Attachment: HDFS-14652.003.patch

> HealthMonitor connection retry times should be configurable
> ---
>
> Key: HDFS-14652
> URL: https://issues.apache.org/jira/browse/HDFS-14652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14652-001.patch, HDFS-14652-002.patch, 
> HDFS-14652.003.patch
>
>
> On our production HDFS cluster, a burst of client requests filled the TCP 
> kernel queue on the NameNode's host. Since "net.ipv4.tcp_syn_retries" is set 
> to 1 in our environment, after 3 seconds the ZooKeeper HealthMonitor got a 
> connection error like this:
> {code:java}
> WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
> monitor health of NameNode at nn_host_name/ip_address:port: Call From 
> zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
> java.net.ConnectException: Connection timed out; For more details see: 
> http://wiki.apache.org/hadoop/ConnectionRefused
> {code}
> This error caused a failover and affected the availability of that cluster; we 
> fixed it by raising the kernel parameter net.ipv4.tcp_syn_retries to 6.
> But while working on this issue, we found that the health monitor's connection 
> retry count (ipc.client.connect.max.retries) is hard-coded to 1. I think it 
> should be configurable: if we don't want the health monitor to be so 
> sensitive, we can change its behavior through this configuration.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-08-05 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14652:
--
Status: Patch Available  (was: Reopened)

> HealthMonitor connection retry times should be configurable
> ---
>
> Key: HDFS-14652
> URL: https://issues.apache.org/jira/browse/HDFS-14652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14652-001.patch, HDFS-14652-002.patch, 
> HDFS-14652.003.patch
>
>
> On our production HDFS cluster, a burst of client requests filled the TCP 
> kernel queue on the NameNode's host. Since "net.ipv4.tcp_syn_retries" is set 
> to 1 in our environment, after 3 seconds the ZooKeeper HealthMonitor got a 
> connection error like this:
> {code:java}
> WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
> monitor health of NameNode at nn_host_name/ip_address:port: Call From 
> zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
> java.net.ConnectException: Connection timed out; For more details see: 
> http://wiki.apache.org/hadoop/ConnectionRefused
> {code}
> This error caused a failover and affected the availability of that cluster; we 
> fixed it by raising the kernel parameter net.ipv4.tcp_syn_retries to 6.
> But while working on this issue, we found that the health monitor's connection 
> retry count (ipc.client.connect.max.retries) is hard-coded to 1. I think it 
> should be configurable: if we don't want the health monitor to be so 
> sensitive, we can change its behavior through this configuration.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289409=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289409
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 03:39
Start Date: 06/Aug/19 03:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518484922
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----:|:--------|:-------|
   | 0 | reexec | 65 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 95 | Maven dependency ordering for branch |
   | +1 | mvninstall | 657 | trunk passed |
   | +1 | compile | 376 | trunk passed |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 842 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 454 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 656 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 570 | the patch passed |
   | +1 | compile | 374 | the patch passed |
   | +1 | javac | 374 | the patch passed |
   | +1 | checkstyle | 67 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 663 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | the patch passed |
   | +1 | findbugs | 694 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 329 | hadoop-hdds in the patch passed. |
   | -1 | unit | 252 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 6253 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.request.volume.TestOMVolumeCreateRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeDeleteRequest |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketSetPropertyResponse |
   |   | hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   |   | hadoop.ozone.om.request.s3.multipart.TestS3MultipartUploadAbortRequest 
|
   |   | 
hadoop.ozone.om.request.s3.multipart.TestS3MultipartUploadCompleteRequest |
   |   | hadoop.ozone.om.request.file.TestOMFileCreateRequest |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetOwnerResponse |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketCreateResponse |
   |   | hadoop.ozone.om.request.key.TestOMKeyCommitRequest |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeCreateResponse |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketCreateRequest |
   |   | hadoop.ozone.om.TestBucketManagerImpl |
   |   | hadoop.ozone.om.TestKeyDeletingService |
   |   | hadoop.ozone.om.request.key.TestOMAllocateBlockRequest |
   |   | 
hadoop.ozone.om.request.s3.multipart.TestS3MultipartUploadCommitPartRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetQuotaRequest |
   |   | hadoop.ozone.om.request.key.TestOMKeyCreateRequest |
   |   | hadoop.ozone.om.response.s3.bucket.TestS3BucketCreateResponse |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetOwnerRequest |
   |   | 
hadoop.ozone.om.request.s3.multipart.TestS3InitiateMultipartUploadRequest |
   |   | hadoop.ozone.om.TestS3BucketManager |
   |   | hadoop.ozone.om.request.file.TestOMDirectoryCreateRequest |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketSetPropertyRequest |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetQuotaResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3a9afef664cc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git 

[jira] [Updated] (HDFS-14702) Datanode.ReplicaMap memory leak

2019-08-05 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-14702:
---
Attachment: datanode.dump.png

> Datanode.ReplicaMap memory leak
> ---
>
> Key: HDFS-14702
> URL: https://issues.apache.org/jira/browse/HDFS-14702
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: He Xiaoqiao
>Priority: Major
> Attachments: datanode.dump.png
>
>
> DataNode memory is occupied by ReplicaMaps, causing frequent GC and degrading 
> write performance.
> There are about 600K block replicas on the DataNode, but a heap dump shows 
> over 8M items in the ReplicaMaps with a footprint over 500MB, which looks 
> like a memory leak. In addition, the block read/write ops rate is very high.
> We have not tested HDFS-8859 and do not know whether it would solve this issue.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14679) Failed to add erasure code policies with example template

2019-08-05 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14679:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> Failed to add erasure code policies with example template
> -
>
> Key: HDFS-14679
> URL: https://issues.apache.org/jira/browse/HDFS-14679
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.1.2
>Reporter: Yuan Zhou
>Assignee: Yuan Zhou
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14679-01.patch, HDFS-14679-02.patch, 
> HDFS-14679-03.patch, fix_adding_EC_policy_example.diff
>
>
> Hi Hadoop developers,
>  
> Trying to do some quick tests with the erasure coding feature, I ran into an 
> issue adding policies. The example of adding erasure code policies with the 
> provided template failed:
> {quote}./bin/hdfs ec -addPolicies -policyFile 
> /tmp/user_ec_policies.xml.template
>  2019-07-30 10:35:16,447 INFO util.ECPolicyLoader: Loading EC policy file 
> /tmp/user_ec_policies.xml.template
>  Add ErasureCodingPolicy XOR-2-1-128k succeed.
>  Add ErasureCodingPolicy RS-LEGACY-12-4-256k failed and error message is 
> Codec name RS-legacy is not supported
> {quote}
> The issue seems to be a codec-name case mismatch (upper case vs. lower case): 
> the codec is upper case in the example template[1], while all available 
> codecs are lower case[2]. One way to fix it may be to convert the codec name 
> to lower case when parsing the policy schema. A simple patch is attached.
> [1] 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/user_ec_policies.xml.template#L51]
> [2][https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java#L28-L33]
> Thanks, -yuan
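
The proposed normalization is a one-liner; a hedged sketch (the method here is 
illustrative, not the actual ECPolicyLoader code, and the fix that was 
ultimately committed adjusted the example template instead):
{code:java}
import java.util.Locale;

public class CodecNameExample {
  // Illustrative only: lower-case the codec read from the policy XML so that
  // "RS-LEGACY" from the template matches the registered codec "rs-legacy".
  static String normalizeCodec(String codecFromXml) {
    return codecFromXml.toLowerCase(Locale.ROOT);
  }

  public static void main(String[] args) {
    System.out.println(normalizeCodec("RS-LEGACY")); // prints rs-legacy
  }
}
{code}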



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14679) Failed to add erasure code policies with example template

2019-08-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900578#comment-16900578
 ] 

Hudson commented on HDFS-14679:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17043 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17043/])
HDFS-14679. Failed to add erasure code policies with example template. 
(ayushsaxena: rev 11272159bb7308b854319b73a76d60f2ec5bfe23)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/conf/user_ec_policies.xml.template


> Failed to add erasure code policies with example template
> -
>
> Key: HDFS-14679
> URL: https://issues.apache.org/jira/browse/HDFS-14679
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.1.2
>Reporter: Yuan Zhou
>Assignee: Yuan Zhou
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14679-01.patch, HDFS-14679-02.patch, 
> HDFS-14679-03.patch, fix_adding_EC_policy_example.diff
>
>
> Hi Hadoop developers,
>  
> Trying to do some quick tests with the erasure coding feature, I ran into an 
> issue adding policies. The example of adding erasure code policies with the 
> provided template failed:
> {quote}./bin/hdfs ec -addPolicies -policyFile 
> /tmp/user_ec_policies.xml.template
>  2019-07-30 10:35:16,447 INFO util.ECPolicyLoader: Loading EC policy file 
> /tmp/user_ec_policies.xml.template
>  Add ErasureCodingPolicy XOR-2-1-128k succeed.
>  Add ErasureCodingPolicy RS-LEGACY-12-4-256k failed and error message is 
> Codec name RS-legacy is not supported
> {quote}
> The issue seems to be a codec-name case mismatch (upper case vs. lower case): 
> the codec is upper case in the example template[1], while all available 
> codecs are lower case[2]. One way to fix it may be to convert the codec name 
> to lower case when parsing the policy schema. A simple patch is attached.
> [1] 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/user_ec_policies.xml.template#L51]
> [2][https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java#L28-L33]
> Thanks, -yuan



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14679) Failed to add erasure code policies with example template

2019-08-05 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900576#comment-16900576
 ] 

Ayush Saxena commented on HDFS-14679:
-

Committed to trunk.
Thanx [~yuanzhou] for the contribution!!!

> Failed to add erasure code policies with example template
> -
>
> Key: HDFS-14679
> URL: https://issues.apache.org/jira/browse/HDFS-14679
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.1.2
>Reporter: Yuan Zhou
>Assignee: Yuan Zhou
>Priority: Minor
> Attachments: HDFS-14679-01.patch, HDFS-14679-02.patch, 
> HDFS-14679-03.patch, fix_adding_EC_policy_example.diff
>
>
> Hi Hadoop developers,
>  
> Trying to do some quick tests with the erasure coding feature, I ran into an 
> issue adding policies. The example of adding erasure code policies with the 
> provided template failed:
> {quote}./bin/hdfs ec -addPolicies -policyFile 
> /tmp/user_ec_policies.xml.template
>  2019-07-30 10:35:16,447 INFO util.ECPolicyLoader: Loading EC policy file 
> /tmp/user_ec_policies.xml.template
>  Add ErasureCodingPolicy XOR-2-1-128k succeed.
>  Add ErasureCodingPolicy RS-LEGACY-12-4-256k failed and error message is 
> Codec name RS-legacy is not supported
> {quote}
> The issue seems to be a codec-name case mismatch (upper case vs. lower case): 
> the codec is upper case in the example template[1], while all available 
> codecs are lower case[2]. One way to fix it may be to convert the codec name 
> to lower case when parsing the policy schema. A simple patch is attached.
> [1] 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/user_ec_policies.xml.template#L51]
> [2][https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java#L28-L33]
> Thanks, -yuan



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14679) Failed to add erasure code policies with example template

2019-08-05 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14679:

Summary: Failed to add erasure code policies with example template  (was: 
failed to add erasure code policies with example template)

> Failed to add erasure code policies with example template
> -
>
> Key: HDFS-14679
> URL: https://issues.apache.org/jira/browse/HDFS-14679
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.1.2
>Reporter: Yuan Zhou
>Assignee: Yuan Zhou
>Priority: Minor
> Attachments: HDFS-14679-01.patch, HDFS-14679-02.patch, 
> HDFS-14679-03.patch, fix_adding_EC_policy_example.diff
>
>
> Hi Hadoop developers,
>  
> Trying to do some quick tests with the erasure coding feature, I ran into an 
> issue adding policies. The example of adding erasure code policies with the 
> provided template failed:
> {quote}./bin/hdfs ec -addPolicies -policyFile 
> /tmp/user_ec_policies.xml.template
>  2019-07-30 10:35:16,447 INFO util.ECPolicyLoader: Loading EC policy file 
> /tmp/user_ec_policies.xml.template
>  Add ErasureCodingPolicy XOR-2-1-128k succeed.
>  Add ErasureCodingPolicy RS-LEGACY-12-4-256k failed and error message is 
> Codec name RS-legacy is not supported
> {quote}
> The issue seems to be a codec-name case mismatch (upper case vs. lower case): 
> the codec is upper case in the example template[1], while all available 
> codecs are lower case[2]. One way to fix it may be to convert the codec name 
> to lower case when parsing the policy schema. A simple patch is attached.
> [1] 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/user_ec_policies.xml.template#L51]
> [2][https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java#L28-L33]
> Thanks, -yuan



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-08-05 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang reopened HDFS-14652:
---

The patch missed the properties in core-default.xml.

> HealthMonitor connection retry times should be configurable
> ---
>
> Key: HDFS-14652
> URL: https://issues.apache.org/jira/browse/HDFS-14652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14652-001.patch, HDFS-14652-002.patch
>
>
> On our production HDFS cluster, a burst of client requests filled the TCP 
> kernel queue on the NameNode's host. Since "net.ipv4.tcp_syn_retries" is set 
> to 1 in our environment, after 3 seconds the ZooKeeper HealthMonitor got a 
> connection error like this:
> {code:java}
> WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
> monitor health of NameNode at nn_host_name/ip_address:port: Call From 
> zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
> java.net.ConnectException: Connection timed out; For more details see: 
> http://wiki.apache.org/hadoop/ConnectionRefused
> {code}
> This error caused a failover and affected the availability of that cluster; we 
> fixed it by raising the kernel parameter net.ipv4.tcp_syn_retries to 6.
> But while working on this issue, we found that the health monitor's connection 
> retry count (ipc.client.connect.max.retries) is hard-coded to 1. I think it 
> should be configurable: if we don't want the health monitor to be so 
> sensitive, we can change its behavior through this configuration.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-08-05 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900563#comment-16900563
 ] 

Lisheng Sun commented on HDFS-14313:


Thanks [~jojochuang] for your suggestion.
  
{quote}One thing that's missing out in the v11 is the update to 
hdfs-default.xml which was missing since v8.
{quote}

 Following [~linyiqun]'s suggestion, I defined hard-coded threshold time values 
(e.g. 1000ms) in ReplicaCachingGetSpaceUsed, so the config was removed from 
hdfs-default.xml.
{code:java}
private static final long DEEP_COPY_REPLICA_THRESHOLD_MS = 50;
private static final long REPLICA_CACHING_GET_SPACE_USED_THRESHOLD_MS = 1000;
{code}

{quote}
Additionally there should be an additional config key 
"fs.getspaceused.classname" in core-default.xml, and state that possible 
options are

org.apache.hadoop.fs.DU (default)
org.apache.hadoop.fs.WindowsGetSpaceUsed
org.apache.hadoop.fs.DFCachingGetSpaceUsed
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed
{quote}

Having ReplicaCachingGetSpaceUsed, which belongs to the hdfs module, listed in 
core-default.xml of the common module does not seem right. Please correct me if 
I am wrong. Thank you again.
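
For reference, selecting the implementation through the quoted key would look 
like this (a sketch; it assumes the hdfs module is on the classpath so the 
class can be loaded):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class SpaceUsedSelectionExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Default, per the list quoted above, is org.apache.hadoop.fs.DU.
    conf.set("fs.getspaceused.classname",
        "org.apache.hadoop.hdfs.server.datanode.fsdataset.impl."
            + "ReplicaCachingGetSpaceUsed");
  }
}
{code}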

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch, HDFS-14313.009.patch, HDFS-14313.010.patch, 
> HDFS-14313.011.patch
>
>
> The two existing ways of getting used space, DU and DF, are insufficient:
>  # Running DU across lots of disks is very expensive, and running all of the 
> processes at the same time creates a noticeable IO spike.
>  # Running DF is inaccurate when the disk is shared by multiple DataNodes or 
> other servers.
> Getting the HDFS used space from the FsDatasetImpl#volumeMap ReplicaInfos in 
> memory is very cheap and accurate.
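
Conceptually, the in-memory computation is a sum over replicas the DataNode 
already tracks; a simplified, self-contained sketch (the Replica class is a 
stand-in for HDFS's ReplicaInfo):
{code:java}
import java.util.Arrays;
import java.util.List;

public class InMemoryUsedSpaceExample {
  static class Replica {
    final long bytesOnDisk;
    Replica(long bytesOnDisk) { this.bytesOnDisk = bytesOnDisk; }
  }

  // O(n) over in-memory state: no du processes and no IO spike, and the result
  // is not skewed by other tenants sharing the disk, unlike df.
  static long usedSpace(List<Replica> replicas) {
    long used = 0L;
    for (Replica r : replicas) {
      used += r.bytesOnDisk;
    }
    return used;
  }

  public static void main(String[] args) {
    List<Replica> replicas =
        Arrays.asList(new Replica(128L << 20), new Replica(64L << 20));
    System.out.println(usedSpace(replicas)); // 201326592 bytes (192 MB)
  }
}
{code}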



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-08-05 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900561#comment-16900561
 ] 

Yiqun Lin commented on HDFS-14313:
--

{quote}
One thing that's missing out in the v11 is the update to hdfs-default.xml which 
was missing since v8.
{quote}
I don't fully get this point. The print-time threshold key is no longer needed 
now. Does this mean we need to add the fs.getspaceused.classname key in 
hdfs-default.xml as well?

Hi [~leosun08], can you additionally address the [~jojochuang]'s comment?

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch, HDFS-14313.009.patch, HDFS-14313.010.patch, 
> HDFS-14313.011.patch
>
>
> The two existing ways of getting used space, DU and DF, are insufficient:
>  # Running DU across lots of disks is very expensive, and running all of the 
> processes at the same time creates a noticeable IO spike.
>  # Running DF is inaccurate when the disk is shared by multiple DataNodes or 
> other servers.
> Getting the HDFS used space from the FsDatasetImpl#volumeMap ReplicaInfos in 
> memory is very cheap and accurate.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14704) RBF:NnId should not be null in NamenodeHeartbeatService

2019-08-05 Thread xuzq (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14704:

Attachment: HDFS-14704-trunk-001.patch

> RBF:NnId should not be null in NamenodeHeartbeatService
> ---
>
> Key: HDFS-14704
> URL: https://issues.apache.org/jira/browse/HDFS-14704
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: xuzq
>Priority: Major
> Attachments: HDFS-14704-trunk-001.patch
>
>
> NnId should not be null in NamenodeHeartbeatService.
> If NnId is null, it logs an error message like:
> {code:java}
> 2019-08-06 10:38:07,455 ERROR router.NamenodeHeartbeatService 
> (NamenodeHeartbeatService.java:updateState(229)) - Unhandled exception 
> updating NN registration for ns1:null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos$NamenodeMembershipRecordProto$Builder.setServiceAddress(HdfsServerFederationProtos.java:3831)
> at 
> org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.MembershipStatePBImpl.setServiceAddress(MembershipStatePBImpl.java:119)
> at 
> org.apache.hadoop.hdfs.server.federation.store.records.MembershipState.newInstance(MembershipState.java:108)
> at 
> org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver.registerNamenode(MembershipNamenodeResolver.java:267)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:223)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
> at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){code}
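
A sketch of the kind of guard the report argues for (names are illustrative, 
not the patch's actual code):
{code:java}
public class NnIdGuardExample {
  // Stand-in for an early check inside NamenodeHeartbeatService#updateState:
  // log clearly and skip registration instead of letting setServiceAddress()
  // throw a NullPointerException.
  static boolean shouldRegister(String nsId, String nnId) {
    if (nnId == null || nnId.isEmpty()) {
      System.err.printf("Namenode ID not configured for nameservice %s; "
          + "skipping registration%n", nsId);
      return false;
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(shouldRegister("ns1", null));  // false, logs the reason
    System.out.println(shouldRegister("ns1", "nn1")); // true
  }
}
{code}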



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-08-05 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-13596:
---
Attachment: HDFS-13596.009.patch

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Blocker
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch, 
> HDFS-13596.006.patch, HDFS-13596.007.patch, HDFS-13596.008.patch, 
> HDFS-13596.009.patch
>
>
> After a rolling upgrade of the NN from 2.x to 3.x, if the NN is restarted, it 
> fails while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at 

[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-08-05 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900558#comment-16900558
 ] 

Fei Hui commented on HDFS-13596:


[~LiJinglun] Thanks for your comments. Good question! I made a mistake; I have 
moved checkErasureCodingSupported to the right place.
We check it when shouldReplicate is false and ecPolicy is not empty:
{code:java}
if (ecPolicy != null && (!ecPolicy.isReplicationPolicy())) {
  checkErasureCodingSupported("createWithEC");
  if (blockSize < ecPolicy.getCellSize()) {
    throw new IOException("Specified block size (" + blockSize
        + ") is less than the cell size (" + ecPolicy.getCellSize()
        + ") of the erasure coding policy (" + ecPolicy + ").");
  }
}
{code}
Upload v009.patch
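
For context on the underlying bug: in edit-log reader terms it comes down to a 
feature gate like the one below, which skips the erasure-coding fields whenever 
the segment carries the pre-upgrade layout version, even though a 3.x writer 
emitted them (a conceptual sketch with illustrative version numbers, not the 
actual FSEditLogOp code):
{code:java}
public class LayoutVersionGateExample {
  // NameNode layout versions are negative and grow more negative per feature.
  static final int EC_FEATURE_VERSION = -64; // illustrative value

  static boolean supportsErasureCoding(int logVersion) {
    return logVersion <= EC_FEATURE_VERSION;
  }

  public static void main(String[] args) {
    int preUpgradeLogVersion = -63; // illustrative 2.x-era value
    // false: the reader skips EC fields the writer actually wrote, so every
    // later field is parsed from the wrong offset, as in the errors quoted
    // in this issue.
    System.out.println(supportsErasureCoding(preUpgradeLogVersion));
  }
}
{code}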


> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Blocker
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch, 
> HDFS-13596.006.patch, HDFS-13596.007.patch, HDFS-13596.008.patch
>
>
> After a rolling upgrade of the NN from 2.x to 3.x, if the NN is restarted, it 
> fails while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at 

[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289396=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289396
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 02:55
Start Date: 06/Aug/19 02:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518477067
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----:|:--------|:-------|
   | 0 | reexec | 888 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 75 | Maven dependency ordering for branch |
   | +1 | mvninstall | 618 | trunk passed |
   | +1 | compile | 343 | trunk passed |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 785 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 415 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 609 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 529 | the patch passed |
   | +1 | compile | 340 | the patch passed |
   | +1 | javac | 340 | the patch passed |
   | +1 | checkstyle | 67 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 609 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 145 | the patch passed |
   | +1 | findbugs | 615 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 352 | hadoop-hdds in the patch passed. |
   | -1 | unit | 269 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 6675 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.request.volume.TestOMVolumeSetQuotaRequest |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketCreateRequest |
   |   | hadoop.ozone.om.TestBucketManagerImpl |
   |   | 
hadoop.ozone.om.request.s3.multipart.TestS3MultipartUploadCommitPartRequest |
   |   | hadoop.ozone.om.TestKeyDeletingService |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketCreateResponse |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetOwnerRequest |
   |   | hadoop.ozone.om.request.file.TestOMDirectoryCreateRequest |
   |   | hadoop.ozone.om.response.s3.bucket.TestS3BucketCreateResponse |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketSetPropertyRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeCreateRequest |
   |   | hadoop.ozone.om.request.key.TestOMKeyCommitRequest |
   |   | 
hadoop.ozone.om.request.s3.multipart.TestS3MultipartUploadCompleteRequest |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetQuotaResponse |
   |   | hadoop.ozone.om.TestS3BucketManager |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketSetPropertyResponse |
   |   | hadoop.ozone.om.request.file.TestOMFileCreateRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeDeleteRequest |
   |   | hadoop.ozone.om.request.key.TestOMKeyCreateRequest |
   |   | hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   |   | hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetOwnerResponse |
   |   | hadoop.ozone.om.request.key.TestOMAllocateBlockRequest |
   |   | hadoop.ozone.om.request.s3.multipart.TestS3MultipartUploadAbortRequest 
|
   |   | hadoop.ozone.om.response.volume.TestOMVolumeCreateResponse |
   |   | 
hadoop.ozone.om.request.s3.multipart.TestS3InitiateMultipartUploadRequest |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b77b87edf84e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | 

[jira] [Created] (HDFS-14704) RBF:NnId should not be null in NamenodeHeartbeatService

2019-08-05 Thread xuzq (JIRA)
xuzq created HDFS-14704:
---

 Summary: RBF:NnId should not be null in NamenodeHeartbeatService
 Key: HDFS-14704
 URL: https://issues.apache.org/jira/browse/HDFS-14704
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
Reporter: xuzq


NnId should not be null in NamenodeHeartbeatService.

If NnId is null, it currently prints an error like the following:
{code:java}
2019-08-06 10:38:07,455 ERROR router.NamenodeHeartbeatService 
(NamenodeHeartbeatService.java:updateState(229)) - Unhandled exception updating 
NN registration for ns1:null
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos$NamenodeMembershipRecordProto$Builder.setServiceAddress(HdfsServerFederationProtos.java:3831)
at 
org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.MembershipStatePBImpl.setServiceAddress(MembershipStatePBImpl.java:119)
at 
org.apache.hadoop.hdfs.server.federation.store.records.MembershipState.newInstance(MembershipState.java:108)
at 
org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver.registerNamenode(MembershipNamenodeResolver.java:267)
at 
org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:223)
at 
org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
at 
org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748){code}
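For illustration, a minimal sketch of the kind of defensive check being proposed; the method body below is an assumption for the sketch, not the actual NamenodeHeartbeatService code:

{code:java}
// Hypothetical guard, not the actual patch: skip the registration update
// when the namenode ID has not been resolved, instead of failing with an
// NPE deep inside the protobuf builder.
private void updateState(NamenodeStatusReport report) {
  if (report == null || report.getNamenodeId() == null) {
    LOG.error("Skipping NN registration update for {}: NnId is null",
        report == null ? "unknown" : report.getNameserviceId());
    return;
  }
  // ... proceed with resolver.registerNamenode(report) as before ...
}
{code}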



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-08-05 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900549#comment-16900549
 ] 

Wei-Chiu Chuang commented on HDFS-14313:


Hi [~leosun08], thanks a lot for the continued work here, and [~linyiqun] for 
the continued review while I dropped out.

I think the v11 patch is good. [~linyiqun], if you are OK with it, feel free 
to commit.

 

One thing missing in v11 is the update to hdfs-default.xml, which has been 
absent since v8.

Additionally, there should be a new config key "fs.getspaceused.classname" in 
core-default.xml, stating that the possible options are (see the sketch after 
this list):
 # org.apache.hadoop.fs.DU (default)
 # org.apache.hadoop.fs.WindowsGetSpaceUsed
 # org.apache.hadoop.fs.DFCachingGetSpaceUsed
 # 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed
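
A minimal sketch of what such a core-default.xml entry could look like; the description text is illustrative, and only the key name and the class names come from the list above:

{code:xml}
<property>
  <name>fs.getspaceused.classname</name>
  <value></value>
  <description>
    Fully qualified class name of the GetSpaceUsed implementation to use.
    An empty value falls back to the default, org.apache.hadoop.fs.DU.
    Possible options: org.apache.hadoop.fs.DU,
    org.apache.hadoop.fs.WindowsGetSpaceUsed,
    org.apache.hadoop.fs.DFCachingGetSpaceUsed,
    org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.
  </description>
</property>
{code}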

 

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch, HDFS-14313.009.patch, HDFS-14313.010.patch, 
> HDFS-14313.011.patch
>
>
> There are two existing ways of getting used space, DU and DF, and both are 
> insufficient.
>  #  Running DU across lots of disks is very expensive, and running all of the 
> processes at the same time creates a noticeable IO spike.
>  #  Running DF is inaccurate when the disk is shared by multiple datanodes or 
> other servers.
>  Getting HDFS used space from the in-memory FsDatasetImpl#volumeMap 
> ReplicaInfos is very cheap and accurate.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-05 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1913:


 Summary: Fix OzoneBucket and RpcClient APIS for acl
 Key: HDDS-1913
 URL: https://issues.apache.org/jira/browse/HDDS-1913
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Fix addAcl/removeAcl in OzoneBucket to use the new ACL APIs addAcl/removeAcl 
added as part of HDDS-1739.

Remove addBucketAcls and removeBucketAcls from RpcClient; we should use 
addAcl/removeAcl instead.
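
A minimal sketch of how the unified calls might look from client code; the volume/bucket names are illustrative and the OzoneAcl constructor is assumed from the HDDS-1739 ACL types:

{code:java}
import org.apache.hadoop.ozone.OzoneAcl;
import org.apache.hadoop.ozone.client.OzoneBucket;
import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType;
import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;

// Hypothetical usage once OzoneBucket exposes addAcl/removeAcl directly.
OzoneBucket bucket = objectStore.getVolume("vol1").getBucket("bucket1");
OzoneAcl readAcl = new OzoneAcl(ACLIdentityType.USER, "hive", ACLType.READ);
bucket.addAcl(readAcl);     // replaces RpcClient#addBucketAcls
bucket.removeAcl(readAcl);  // replaces RpcClient#removeBucketAcls
{code}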



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289387&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289387
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 02:25
Start Date: 06/Aug/19 02:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518471344
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 160 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 100 | Maven dependency ordering for branch |
   | +1 | mvninstall | 817 | trunk passed |
   | +1 | compile | 460 | trunk passed |
   | +1 | checkstyle | 93 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1105 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 219 | trunk passed |
   | 0 | spotbugs | 514 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 755 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 41 | Maven dependency ordering for patch |
   | +1 | mvninstall | 692 | the patch passed |
   | +1 | compile | 446 | the patch passed |
   | +1 | javac | 446 | the patch passed |
   | +1 | checkstyle | 99 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 861 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 216 | the patch passed |
   | +1 | findbugs | 844 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 373 | hadoop-hdds in the patch passed. |
   | -1 | unit | 251 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 7785 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketSetPropertyResponse |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetOwnerResponse |
   |   | hadoop.ozone.om.request.key.TestOMAllocateBlockRequest |
   |   | hadoop.ozone.om.request.file.TestOMFileCreateRequest |
   |   | 
hadoop.ozone.om.request.s3.multipart.TestS3MultipartUploadCompleteRequest |
   |   | hadoop.ozone.om.request.key.TestOMKeyCommitRequest |
   |   | hadoop.ozone.om.request.s3.multipart.TestS3MultipartUploadAbortRequest 
|
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetOwnerRequest |
   |   | 
hadoop.ozone.om.request.s3.multipart.TestS3MultipartUploadCommitPartRequest |
   |   | hadoop.ozone.om.TestKeyDeletingService |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketSetPropertyRequest |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketCreateRequest |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetQuotaResponse |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeCreateRequest |
   |   | 
hadoop.ozone.om.request.s3.multipart.TestS3InitiateMultipartUploadRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeDeleteRequest |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketCreateResponse |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeCreateResponse |
   |   | hadoop.ozone.om.response.s3.bucket.TestS3BucketCreateResponse |
   |   | hadoop.ozone.om.TestBucketManagerImpl |
   |   | hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   |   | hadoop.ozone.om.request.file.TestOMDirectoryCreateRequest |
   |   | hadoop.ozone.om.TestS3BucketManager |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetQuotaRequest |
   |   | hadoop.ozone.om.request.key.TestOMKeyCreateRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7f965752d091 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | 

[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289377&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289377
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 02:01
Start Date: 06/Aug/19 02:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518466282
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 622 | trunk passed |
   | +1 | compile | 368 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 952 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | trunk passed |
   | 0 | spotbugs | 472 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 707 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 601 | the patch passed |
   | +1 | compile | 395 | the patch passed |
   | +1 | javac | 395 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 758 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | the patch passed |
   | +1 | findbugs | 702 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 401 | hadoop-hdds in the patch failed. |
   | -1 | unit | 328 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 61 | The patch does not generate ASF License warnings. |
   | | | 6653 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1306acdb7962 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d6697da |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/9/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/9/testReport/ |
   | Max. process+thread count | 1238 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289377)
Time Spent: 4h  (was: 3h 50m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> This Jira is to use the new OM HA code in the non-HA code path.



--
This message was 

[jira] [Commented] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900529#comment-16900529
 ] 

Hadoop QA commented on HDFS-14204:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 38 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
34s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
21s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 5s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
42s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
49s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
39s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
29s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
33s{color} | {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  2m 33s{color} | 
{color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 33s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 10s{color} 
| {color:red} root-jdk1.8.0_222 with JDK v1.8.0_222 generated 1 new + 1345 
unchanged - 1 fixed = 1346 total (was 1346) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 55s{color} | {color:orange} root: The patch generated 76 new + 3191 
unchanged - 20 fixed = 3267 total (was 3211) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 30s{color} 
| {color:red} hadoop-hdfs in the 

[jira] [Commented] (HDFS-14703) NameNode Fine-Grained Locking via Metadata Partitioning

2019-08-05 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900526#comment-16900526
 ] 

Konstantin Shvachko commented on HDFS-14703:


Attached the design document for review.

> NameNode Fine-Grained Locking via Metadata Partitioning
> ---
>
> Key: HDFS-14703
> URL: https://issues.apache.org/jira/browse/HDFS-14703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: NameNode Fine-Grained Locking.pdf
>
>
> We aim to enable fine-grained locking by splitting the in-memory namespace 
> into multiple partitions, each guarded by a separate lock. This is intended 
> to improve the performance of NameNode write operations.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8708) DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies

2019-08-05 Thread Chengbing Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900527#comment-16900527
 ] 

Chengbing Liu commented on HDFS-8708:
-

[~ayushtkn] [~shv] Could you please review the change if you have time? Thanks!

> DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies
> --
>
> Key: HDFS-8708
> URL: https://issues.apache.org/jira/browse/HDFS-8708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Jitendra Nath Pandey
>Assignee: Chengbing Liu
>Priority: Critical
> Attachments: HDFS-8708.001.patch, HDFS-8708.002.patch
>
>
> DFSClient should ignore dfs.client.retry.policy.enabled for HA proxies to 
> ensure fast failover. Otherwise, dfsclient retries the NN which is no longer 
> active and delays the failover.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1912) start-ozone.sh fail due to ozone-config.sh not found

2019-08-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900525#comment-16900525
 ] 

Hadoop QA commented on HDDS-1912:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
0s{color} | {color:red} The patch generated 1 new + 3 unchanged - 0 fixed = 4 
total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
50s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2762/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1912 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976763/HDDS-1912.001.patch |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux e15378eaeef9 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / d6697da |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2762/artifact/out/diff-patch-shellcheck.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2762/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/common U: hadoop-ozone/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2762/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> start-ozone.sh fail due to ozone-config.sh not found 
> -
>
> Key: HDDS-1912
> URL: https://issues.apache.org/jira/browse/HDDS-1912
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: kevin su
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1912.001.patch
>
>
> I want to run Ozone individually, but it will always look for ozone-config.sh 
> in the *$HADOOP_HOME*/libexec first

[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=289369&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289369
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 06/Aug/19 01:39
Start Date: 06/Aug/19 01:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-518461706
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 550 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for branch |
   | +1 | mvninstall | 587 | trunk passed |
   | +1 | compile | 375 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 849 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 143 | trunk passed |
   | 0 | spotbugs | 416 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 610 | trunk passed |
   | -0 | patch | 451 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 531 | the patch passed |
   | +1 | compile | 358 | the patch passed |
   | +1 | cc | 358 | the patch passed |
   | +1 | javac | 358 | the patch passed |
   | +1 | checkstyle | 65 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 657 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 623 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 298 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2759 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 8875 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux dff15a84f10d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d6697da |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/12/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/12/testReport/ |
   | Max. process+thread count | 4230 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/12/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries 

[jira] [Updated] (HDFS-14703) NameNode Fine-Grained Locking via Metadata Partitioning

2019-08-05 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14703:
---
Attachment: NameNode Fine-Grained Locking.pdf

> NameNode Fine-Grained Locking via Metadata Partitioning
> ---
>
> Key: HDFS-14703
> URL: https://issues.apache.org/jira/browse/HDFS-14703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: NameNode Fine-Grained Locking.pdf
>
>
> We aim to enable fine-grained locking by splitting the in-memory namespace 
> into multiple partitions, each guarded by a separate lock. This is intended 
> to improve the performance of NameNode write operations.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-08-05 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900515#comment-16900515
 ] 

Lisheng Sun edited comment on HDFS-14313 at 8/6/19 1:37 AM:


Thanks [~linyiqun] for your thorough review. I updated the patch per your 
comments. 

checkstyle issue:
{code:java}
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSCachingGetSpaceUsed.java:64:
public Builder setBpid(String bpid) {:35: 'bpid' hides a field. 
[HiddenField]
{code}
I think it is not a problem (see the sketch below).
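
For context, a minimal illustration of the builder-setter pattern that trips checkstyle's HiddenField rule; the shadowing is intentional and disambiguated with this., so the warning is benign:

{code:java}
// Illustrative only: the setter parameter shadows the builder field,
// which is exactly what HiddenField flags. Qualifying the assignment
// with 'this.' keeps it unambiguous, so the warning is harmless here.
public static class Builder {
  private String bpid;

  public Builder setBpid(String bpid) {
    this.bpid = bpid;  // intentional shadowing, standard builder style
    return this;
  }
}
{code}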

I have uploaded the v11 patch. Could you help review it? Thank you. 


was (Author: leosun08):
Thanks [~linyiqun] for your thorough review. I updated the patch per your 
comments. 

checkstyle issue:
{code:java}
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSCachingGetSpaceUsed.java:64:
public Builder setBpid(String bpid) {:35: 'bpid' hides a field. 
[HiddenField]
{code}
I think it is not a problem. I have uploaded the v11 patch. Could you help 
review it? Thank you. 

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch, HDFS-14313.009.patch, HDFS-14313.010.patch, 
> HDFS-14313.011.patch
>
>
> There are two existing ways of getting used space, DU and DF, and both are 
> insufficient.
>  #  Running DU across lots of disks is very expensive, and running all of the 
> processes at the same time creates a noticeable IO spike.
>  #  Running DF is inaccurate when the disk is shared by multiple datanodes or 
> other servers.
>  Getting HDFS used space from the in-memory FsDatasetImpl#volumeMap 
> ReplicaInfos is very cheap and accurate.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14703) NameNode Fine-Grained Locking via Metadata Partitioning

2019-08-05 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-14703:
--

 Summary: NameNode Fine-Grained Locking via Metadata Partitioning
 Key: HDFS-14703
 URL: https://issues.apache.org/jira/browse/HDFS-14703
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs, namenode
Reporter: Konstantin Shvachko


We aim to enable fine-grained locking by splitting the in-memory namespace 
into multiple partitions, each guarded by a separate lock. This is intended to 
improve the performance of NameNode write operations.
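
A minimal, self-contained sketch of the general idea (lock striping keyed by inode id); this illustrates the concept only and is not taken from the attached design document:

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Conceptual sketch: namespace metadata split into N partitions, each
// guarded by its own read/write lock instead of one global namesystem lock.
public class PartitionedLock {
  private final ReentrantReadWriteLock[] locks;

  public PartitionedLock(int partitions) {
    locks = new ReentrantReadWriteLock[partitions];
    for (int i = 0; i < partitions; i++) {
      locks[i] = new ReentrantReadWriteLock();
    }
  }

  // Writes that touch different partitions no longer contend.
  public void withWriteLock(long inodeId, Runnable op) {
    // Inode ids are non-negative, so a plain modulo picks the partition.
    ReentrantReadWriteLock lock = locks[(int) (inodeId % locks.length)];
    lock.writeLock().lock();
    try {
      op.run();
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}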



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12738) Namenode logs many "SSL renegotiate denied" warnings after enable HTTPS for HDFS

2019-08-05 Thread haohua li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900517#comment-16900517
 ] 

haohua li commented on HDFS-12738:
--

Hi there,

Is there any progress on this issue?

> Namenode logs many "SSL renegotiate denied" warnings after enable HTTPS for 
> HDFS
> 
>
> Key: HDFS-12738
> URL: https://issues.apache.org/jira/browse/HDFS-12738
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Bharat Viswanadham
>Priority: Major
>
> After enable HTTPS(SSL) for HDFS, the namenode log prints lots of SSL 
> renegotiate denied warnings. Not sure if this is caused by a similar reason 
> like YARN-6797. This ticket is opened to investigate and fix this. 
>  
> {code}
> 2017-10-24 07:55:41,083 WARN  mortbay.log (Slf4jLog.java:warn(76)) - SSL 
> renegotiate denied: java.nio.channels.SocketChannel[connected 
> local=/192.168.64.101:50470 remote=/192.168.64.101:48365]
> 2017-10-24 07:55:50,075 WARN  mortbay.log (Slf4jLog.java:warn(76)) - SSL 
> renegotiate denied: java.nio.channels.SocketChannel[connected 
> local=/192.168.64.101:50470 remote=/192.168.64.101:48373]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900516#comment-16900516
 ] 

Hadoop QA commented on HDFS-2470:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 776 unchanged - 5 fixed = 776 total (was 781) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-2470 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976757/HDFS-2470.05.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 6e9e9540604c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d6697da |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27410/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27410/testReport/ |
| Max. process+thread count | 4557 (vs. ulimit of 1) |
| 

[jira] [Work logged] (HDDS-1884) Support Bucket ACL operations for OM HA.

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1884?focusedWorklogId=289361&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289361
 ]

ASF GitHub Bot logged work on HDDS-1884:


Author: ASF GitHub Bot
Created on: 06/Aug/19 01:21
Start Date: 06/Aug/19 01:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1202: HDDS-1884. 
Support Bucket ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-518458595
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 48 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for branch |
   | +1 | mvninstall | 638 | trunk passed |
   | +1 | compile | 376 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 857 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 450 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 664 | trunk passed |
   | -0 | patch | 491 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 589 | the patch passed |
   | +1 | compile | 373 | the patch passed |
   | +1 | cc | 373 | the patch passed |
   | +1 | javac | 373 | the patch passed |
   | +1 | checkstyle | 79 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 673 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 170 | the patch passed |
   | +1 | findbugs | 633 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 306 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1929 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 7842 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 1183f2ee4c10 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d6697da |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/11/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/11/testReport/ |
   | Max. process+thread count | 3605 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/11/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289361)
Time Spent: 4h  (was: 3h 50m)

> Support Bucket ACL 

[jira] [Commented] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-08-05 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900515#comment-16900515
 ] 

Lisheng Sun commented on HDFS-14313:


Thanks [~linyiqun] for your thorough review. I updated the patch per your 
comments. 

checkstyle issue:
{code:java}
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSCachingGetSpaceUsed.java:64:
public Builder setBpid(String bpid) {:35: 'bpid' hides a field. 
[HiddenField]
{code}
I think it is not a problem. I have uploaded the v11 patch. Could you help 
review it? Thank you. 

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch, HDFS-14313.009.patch, HDFS-14313.010.patch, 
> HDFS-14313.011.patch
>
>
> There are two existing ways of getting used space, DU and DF, and both are 
> insufficient.
>  #  Running DU across lots of disks is very expensive, and running all of the 
> processes at the same time creates a noticeable IO spike.
>  #  Running DF is inaccurate when the disk is shared by multiple datanodes or 
> other servers.
>  Getting HDFS used space from the in-memory FsDatasetImpl#volumeMap 
> ReplicaInfos is very cheap and accurate.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=289358&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289358
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 06/Aug/19 01:16
Start Date: 06/Aug/19 01:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1230: HDDS-1895. 
Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230#issuecomment-518457618
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 584 | trunk passed |
   | +1 | compile | 373 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 873 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | trunk passed |
   | 0 | spotbugs | 413 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 606 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 559 | the patch passed |
   | +1 | compile | 389 | the patch passed |
   | +1 | cc | 389 | the patch passed |
   | +1 | javac | 389 | the patch passed |
   | -0 | checkstyle | 50 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 680 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | +1 | findbugs | 633 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 282 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1972 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 7739 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1230 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux c2829efda7ee 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d6697da |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/1/testReport/ |
   | Max. process+thread count | 4999 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1230/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289358)
Time Spent: 20m  (was: 10m)


[jira] [Updated] (HDDS-1912) start-ozone.sh fail due to ozone-config.sh not found

2019-08-05 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-1912:
---
Attachment: HDDS-1912.001.patch
Status: Patch Available  (was: Open)

> start-ozone.sh fail due to ozone-config.sh not found 
> -
>
> Key: HDDS-1912
> URL: https://issues.apache.org/jira/browse/HDDS-1912
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: kevin su
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1912.001.patch
>
>
> I want to run Ozone on its own, but start-ozone.sh always looks for 
> ozone-config.sh in *$HADOOP_HOME*/libexec first.
> If the file is not found, it fails.
> We should look for this file in both *$HADOOP_HOME*/libexec and 
> *$OZONE_HOME*/libexec.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1912) start-ozone.sh fail due to ozone-config.sh not found

2019-08-05 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-1912:
---
Description: 
I want to run Ozone on its own, but start-ozone.sh always looks for 
ozone-config.sh in *$HADOOP_HOME*/libexec first.

If the file is not found, it fails.

We should look for this file in both *$HADOOP_HOME*/libexec and 
*$OZONE_HOME*/libexec.

  was:
I want to run Ozone on its own, but start-ozone.sh always looks for 
ozone-config.sh in *$HADOOP_HOME/*libexec first.

If the file is not found, it fails.

We should look for this file in both *$HADOOP_HOME*/libexec and 
*$OZONE_HOME*/libexec.


> start-ozone.sh fail due to ozone-config.sh not found 
> -
>
> Key: HDDS-1912
> URL: https://issues.apache.org/jira/browse/HDDS-1912
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: kevin su
>Priority: Major
> Fix For: 0.5.0
>
>
> I want to run Ozone on its own, but start-ozone.sh always looks for 
> ozone-config.sh in *$HADOOP_HOME*/libexec first.
> If the file is not found, it fails.
> We should look for this file in both *$HADOOP_HOME*/libexec and 
> *$OZONE_HOME*/libexec.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1912) start-ozone.sh fail due to ozone-config.sh not found

2019-08-05 Thread kevin su (JIRA)
kevin su created HDDS-1912:
--

 Summary: start-ozone.sh fail due to ozone-config.sh not found 
 Key: HDDS-1912
 URL: https://issues.apache.org/jira/browse/HDDS-1912
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone CLI
Affects Versions: 0.5.0
Reporter: kevin su
 Fix For: 0.5.0


I want to run Ozone on its own, but start-ozone.sh always looks for 
ozone-config.sh in *$HADOOP_HOME/*libexec first.

If the file is not found, it fails.

We should look for this file in both *$HADOOP_HOME*/libexec and 
*$OZONE_HOME*/libexec.
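
A minimal sketch of the fallback lookup described above, assuming the usual 
$HADOOP_HOME/$OZONE_HOME layout (variable names are illustrative; this is not 
the attached HDDS-1912.001.patch):

```bash
#!/usr/bin/env bash
# Sketch: resolve ozone-config.sh from $HADOOP_HOME/libexec first,
# then fall back to $OZONE_HOME/libexec instead of failing outright.
ozone_config=""
for dir in "${HADOOP_HOME:-}/libexec" "${OZONE_HOME:-}/libexec"; do
  if [[ -f "${dir}/ozone-config.sh" ]]; then
    ozone_config="${dir}/ozone-config.sh"
    break
  fi
done
if [[ -z "${ozone_config}" ]]; then
  echo "ERROR: ozone-config.sh not found under \$HADOOP_HOME/libexec or \$OZONE_HOME/libexec" >&2
  exit 1
fi
# shellcheck disable=SC1090
. "${ozone_config}"
```

Sourcing the first match keeps existing $HADOOP_HOME deployments working while 
letting a standalone Ozone install fall back to its own libexec.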



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289282&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289282
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 00:11
Start Date: 06/Aug/19 00:11
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518042798
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289282)
Time Spent: 3h 20m  (was: 3h 10m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289284&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289284
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 00:11
Start Date: 06/Aug/19 00:11
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518311451
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289284)
Time Spent: 3h 40m  (was: 3.5h)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289285&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289285
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 00:11
Start Date: 06/Aug/19 00:11
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518446332
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289285)
Time Spent: 3h 50m  (was: 3h 40m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289283&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289283
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 00:11
Start Date: 06/Aug/19 00:11
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518050678
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289283)
Time Spent: 3.5h  (was: 3h 20m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289276&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289276
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 00:10
Start Date: 06/Aug/19 00:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518048715
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 599 | trunk passed |
   | +1 | compile | 366 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 847 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 420 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 612 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 543 | the patch passed |
   | +1 | compile | 372 | the patch passed |
   | +1 | javac | 372 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 22 new + 0 
unchanged - 0 fixed = 22 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 198 | the patch passed |
   | +1 | findbugs | 631 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 291 | hadoop-hdds in the patch passed. |
   | -1 | unit | 168 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 5879 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 034edb50e155 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 065cbc6 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/5/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/5/testReport/ |
   | Max. process+thread count | 1321 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289276)
Time Spent: 2h 20m  (was: 2h 10m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.

[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289281&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289281
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 00:10
Start Date: 06/Aug/19 00:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518357010
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 89 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 656 | trunk passed |
   | +1 | compile | 391 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1077 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 219 | trunk passed |
   | 0 | spotbugs | 530 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 798 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 672 | the patch passed |
   | +1 | compile | 446 | the patch passed |
   | +1 | javac | 446 | the patch passed |
   | +1 | checkstyle | 106 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 879 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 211 | the patch passed |
   | +1 | findbugs | 812 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 408 | hadoop-hdds in the patch passed. |
   | -1 | unit | 301 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
   | | | 7408 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 67e557db9c30 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 71aad60 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/7/testReport/ |
   | Max. process+thread count | 1303 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289281)
Time Spent: 3h 10m  (was: 3h)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289279&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289279
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 00:10
Start Date: 06/Aug/19 00:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518348571
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 599 | trunk passed |
   | +1 | compile | 364 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 827 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 483 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 708 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 588 | the patch passed |
   | +1 | compile | 394 | the patch passed |
   | +1 | javac | 394 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 680 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | +1 | findbugs | 703 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 295 | hadoop-hdds in the patch passed. |
   | -1 | unit | 190 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 6050 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 73c29aed3e0b 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 71aad60 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/6/testReport/ |
   | Max. process+thread count | 1326 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289279)
Time Spent: 2h 50m  (was: 2h 40m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289278&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289278
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 00:10
Start Date: 06/Aug/19 00:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518049935
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 81 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 745 | trunk passed |
   | +1 | compile | 418 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 851 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 184 | trunk passed |
   | 0 | spotbugs | 521 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 756 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 664 | the patch passed |
   | +1 | compile | 445 | the patch passed |
   | +1 | javac | 445 | the patch passed |
   | -0 | checkstyle | 45 | hadoop-ozone: The patch generated 22 new + 0 
unchanged - 0 fixed = 22 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 771 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 184 | the patch passed |
   | +1 | findbugs | 797 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 376 | hadoop-hdds in the patch passed. |
   | -1 | unit | 320 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 6929 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bca472d92b43 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 065cbc6 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/4/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/4/testReport/ |
   | Max. process+thread count | 1392 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289278)
Time Spent: 2h 40m  (was: 2.5h)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.

[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289277&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289277
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 00:10
Start Date: 06/Aug/19 00:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518049554
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 83 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 697 | trunk passed |
   | +1 | compile | 412 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 967 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | trunk passed |
   | 0 | spotbugs | 519 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 799 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 631 | the patch passed |
   | +1 | compile | 412 | the patch passed |
   | +1 | javac | 412 | the patch passed |
   | -0 | checkstyle | 44 | hadoop-ozone: The patch generated 22 new + 0 
unchanged - 0 fixed = 22 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 660 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 205 | the patch passed |
   | +1 | findbugs | 774 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 249 | hadoop-hdds in the patch failed. |
   | -1 | unit | 301 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 6717 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux de11ea326691 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 065cbc6 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/3/testReport/ |
   | Max. process+thread count | 1384 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289277)
Time Spent: 2.5h  (was: 2h 20m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.

[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289275&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289275
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 00:10
Start Date: 06/Aug/19 00:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518048288
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 596 | trunk passed |
   | +1 | compile | 369 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 878 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 421 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 614 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 549 | the patch passed |
   | +1 | compile | 380 | the patch passed |
   | +1 | javac | 380 | the patch passed |
   | -0 | checkstyle | 44 | hadoop-ozone: The patch generated 22 new + 0 
unchanged - 0 fixed = 22 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 678 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | +1 | findbugs | 649 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 304 | hadoop-hdds in the patch failed. |
   | -1 | unit | 165 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 5922 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a494df207080 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 065cbc6 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/2/testReport/ |
   | Max. process+thread count | 1412 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289275)
Time Spent: 2h 10m  (was: 2h)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.

[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289280&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289280
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 06/Aug/19 00:10
Start Date: 06/Aug/19 00:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518352807
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 619 | trunk passed |
   | +1 | compile | 352 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 826 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | trunk passed |
   | 0 | spotbugs | 473 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 687 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 599 | the patch passed |
   | +1 | compile | 378 | the patch passed |
   | +1 | javac | 378 | the patch passed |
   | +1 | checkstyle | 72 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 654 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | +1 | findbugs | 680 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 304 | hadoop-hdds in the patch passed. |
   | -1 | unit | 187 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 5992 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   |   | hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 04faebb4bd36 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 71aad60 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/8/testReport/ |
   | Max. process+thread count | 1344 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289280)
Time Spent: 3h  (was: 2h 50m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HDFS-14687) Standby Namenode never come out of safemode when EC files are being written.

2019-08-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900494#comment-16900494
 ] 

Hadoop QA commented on HDFS-14687:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 10 unchanged - 0 fixed = 12 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14687 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976751/HDFS-14687.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 212b42ae013d 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d6697da |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27409/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 

[jira] [Updated] (HDDS-1911) Support Prefix ACL operations for OM HA.

2019-08-05 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1911:
-
Description: +-HDDS-1608-+ adds 4 new APIs for the Ozone RPC client. The OM HA 
implementation needs to handle them.  (was: +HDDS-1541+ adds 4 new APIs for 
the Ozone RPC client. The OM HA implementation needs to handle them.)

> Support Prefix ACL operations for OM HA.
> 
>
> Key: HDDS-1911
> URL: https://issues.apache.org/jira/browse/HDDS-1911
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> +-HDDS-1608-+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1911) Support Prefix ACL operations for OM HA.

2019-08-05 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1911:
-
Labels:   (was: pull-request-available)

> Support Prefix ACL operations for OM HA.
> 
>
> Key: HDDS-1911
> URL: https://issues.apache.org/jira/browse/HDDS-1911
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> +-HDDS-1608-+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1911) Support Prefix ACL operations for OM HA.

2019-08-05 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1911:


 Summary: Support Prefix ACL operations for OM HA.
 Key: HDDS-1911
 URL: https://issues.apache.org/jira/browse/HDDS-1911
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


+HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA implementation 
needs to handle them.
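
For context, a sketch of the four client-facing ACL operations in question; 
the method names and signatures below are assumed from the HDDS-1541 ACL work, 
not quoted from it:

```java
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.ozone.OzoneAcl;
import org.apache.hadoop.ozone.security.acl.OzoneObj;

/**
 * Sketch of the four ACL operations the Ozone RPC client gained; per the
 * issue above, the OM HA implementation needs to handle each of them.
 */
interface OzoneAclOperations {
  boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException;        // add one ACL to an object
  boolean removeAcl(OzoneObj obj, OzoneAcl acl) throws IOException;     // remove one ACL
  boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException; // replace all ACLs
  List<OzoneAcl> getAcl(OzoneObj obj) throws IOException;               // read back the ACLs
}
```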



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-05 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1895:
-
Status: Patch Available  (was: In Progress)

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1488) Scm cli command to start/stop replication manager

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1488?focusedWorklogId=289253&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289253
 ]

ASF GitHub Bot logged work on HDDS-1488:


Author: ASF GitHub Bot
Created on: 05/Aug/19 23:15
Start Date: 05/Aug/19 23:15
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1221: HDDS-1488. Scm 
cli command to start/stop replication manager.
URL: https://github.com/apache/hadoop/pull/1221#issuecomment-518435422
 
 
   One suggestion regarding the output message for the commands.
   Rest LGTM.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289253)
Time Spent: 2h 20m  (was: 2h 10m)

> Scm cli command to start/stop replication manager
> -
>
> Key: HDDS-1488
> URL: https://issues.apache.org/jira/browse/HDDS-1488
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> It would be nice to have scmcli command to start/stop the ReplicationManager 
> thread running in SCM



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1488) Scm cli command to start/stop replication manager

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1488?focusedWorklogId=289252&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289252
 ]

ASF GitHub Bot logged work on HDDS-1488:


Author: ASF GitHub Bot
Created on: 05/Aug/19 23:15
Start Date: 05/Aug/19 23:15
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1221: 
HDDS-1488. Scm cli command to start/stop replication manager.
URL: https://github.com/apache/hadoop/pull/1221#discussion_r310827280
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
 ##
 @@ -469,6 +469,27 @@ public boolean forceExitSafeMode() throws IOException {
     return scm.exitSafeMode();
   }
 
+  @Override
+  public void startReplicationManager() {
+    AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+        SCMAction.START_REPLICATION_MANAGER, null));
+    scm.getReplicationManager().start();
+  }
+
+  @Override
+  public void stopReplicationManager() {
+    AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
+        SCMAction.STOP_REPLICATION_MANAGER, null));
+    scm.getReplicationManager().stop();
 
 Review comment:
   I see the message only says "Stopping ReplicationManager...". With this 
wording the user will not understand that the stop is asynchronous, and a 
getStatus call made immediately afterwards may still show it as running. 
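
A hedged sketch of the wording the comment asks for, reusing the names visible 
in the diff above; the LOG field and message text are illustrative, not the 
change that was merged:

```java
  @Override
  public void stopReplicationManager() {
    AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
        SCMAction.STOP_REPLICATION_MANAGER, null));
    // Spell out the asynchronous hand-off so a getStatus() call issued
    // right after this returns does not surprise the user.
    LOG.info("Stopping ReplicationManager; the stop is asynchronous, so "
        + "getStatus() may report it as running for a short while.");
    scm.getReplicationManager().stop();
  }
```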
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289252)
Time Spent: 2h 10m  (was: 2h)

> Scm cli command to start/stop replication manager
> -
>
> Key: HDDS-1488
> URL: https://issues.apache.org/jira/browse/HDDS-1488
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> It would be nice to have scmcli command to start/stop the ReplicationManager 
> thread running in SCM



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14702) Datanode.ReplicaMap memory leak

2019-08-05 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900467#comment-16900467
 ] 

CR Hota commented on HDFS-14702:


[~hexiaoqiao] Thanks for reporting this.

Can you try backporting HDFS-8859 to the 2.7.1 installation you have and let us 
know how the heap dump looks for the DataNode?

> Datanode.ReplicaMap memory leak
> ---
>
> Key: HDFS-14702
> URL: https://issues.apache.org/jira/browse/HDFS-14702
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: He Xiaoqiao
>Priority: Major
>
> DataNode memory is occupied by ReplicaMaps, causing frequent GC and degraded 
> write performance.
> There are about 600K block replicas located on the DataNode, but when we dump 
> the heap, there are over 8M items in ReplicaMaps with a footprint of over 
> 500MB. It looks like a memory leak. One more data point: the block read/write 
> ops rate is very high.
> We have not tested HDFS-8859 and have no idea whether it can solve this issue.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?focusedWorklogId=289249=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289249
 ]

ASF GitHub Bot logged work on HDDS-1895:


Author: ASF GitHub Bot
Created on: 05/Aug/19 23:06
Start Date: 05/Aug/19 23:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1230: 
HDDS-1895. Support Key ACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1230
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289249)
Time Spent: 10m
Remaining Estimate: 0h

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.
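For context, a sketch of the four key ACL calls as a client might use them 
(names taken from HDDS-1541 and assumed here, not verified against the final 
API; objectStore is an assumed client handle):

{code}
// Hypothetical usage of the four key ACL operations; in the HA setup each
// write (add/remove/set) must go through the OM Ratis ring before it is
// applied, which is what this sub-task implements.
OzoneObj key = OzoneObjInfo.Builder.newBuilder()
    .setVolumeName("vol1").setBucketName("bucket1").setKeyName("key1")
    .setResType(OzoneObj.ResourceType.KEY)
    .setStoreType(OzoneObj.StoreType.OZONE)
    .build();
OzoneAcl acl = OzoneAcl.parseAcl("user:hadoop:rw");
objectStore.addAcl(key, acl);
objectStore.removeAcl(key, acl);
objectStore.setAcl(key, Collections.singletonList(acl));
List<OzoneAcl> acls = objectStore.getAcl(key);
{code}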



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1895) Support Key ACL operations for OM HA.

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1895:
-
Labels: pull-request-available  (was: )

> Support Key ACL operations for OM HA.
> -
>
> Key: HDDS-1895
> URL: https://issues.apache.org/jira/browse/HDDS-1895
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> +HDDS-1541+ adds 4 new APIs for the Ozone RPC client. The OM HA 
> implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14557) JournalNode error: Can't scan a pre-transactional edit log

2019-08-05 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900459#comment-16900459
 ] 

Wei-Chiu Chuang commented on HDFS-14557:


Looks good to me. Thanks for the explanation, and that was very helpful.
+1

> JournalNode error: Can't scan a pre-transactional edit log
> --
>
> Key: HDFS-14557
> URL: https://issues.apache.org/jira/browse/HDFS-14557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14557.001.patch, HDFS-14557.002.patch
>
>
> We saw the following error in JournalNodes a few times before.
> {noformat}
> 2016-09-22 12:44:24,505 WARN org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Caught exception after scanning through 0 ops from /data/1/dfs/current/ed
> its_inprogress_0661942 while determining its valid length. 
> Position was 761856
> java.io.IOException: Can't scan a pre-transactional edit log.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$LegacyReader.scanOp(FSEditLogOp.java:4592)
> at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.scanNextOp(EditLogFileInputStream.java:245)
> at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.scanEditLog(EditLogFileInputStream.java:355)
> at 
> org.apache.hadoop.hdfs.server.namenode.FileJournalManager$EditLogFile.scanLog(FileJournalManager.java:551)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.scanStorageForLatestEdits(Journal.java:193)
> at org.apache.hadoop.hdfs.qjournal.server.Journal.(Journal.java:153)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:90)
> {noformat}
> The edit file was corrupt, and one possible culprit of this error is a full 
> disk. The JournalNode can't recover on its own and must be resynced manually 
> from the other JournalNodes. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1732) Add guice injection to Recon task framework.

2019-08-05 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan reassigned HDDS-1732:
---

Assignee: Aravindan Vijayan

> Add guice injection to Recon task framework.
> 
>
> Key: HDDS-1732
> URL: https://issues.apache.org/jira/browse/HDDS-1732
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>
> * Currently, the task framework is not bound to the injection module in 
> Recon. To keep it consistent, it is better to move to injection-based 
> instantiation of tasks (see the sketch below).
> * Clean up the unit tests in ozone-recon so that duplicated code is removed.
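For illustration, injection-based task wiring could look like this Guice 
sketch (the module and binding names are assumptions, not the actual patch):

{code}
import com.google.inject.AbstractModule;
import com.google.inject.multibindings.Multibinder;

// Hypothetical module: binds each Recon task into an injectable set, so a
// task controller can receive every registered task via
// @Inject Set<ReconDBUpdateTask> instead of instantiating tasks by hand.
public class ReconTaskBindingModule extends AbstractModule {
  @Override
  protected void configure() {
    Multibinder<ReconDBUpdateTask> taskBinder =
        Multibinder.newSetBinder(binder(), ReconDBUpdateTask.class);
    taskBinder.addBinding().to(FileSizeCountTask.class);
  }
}
{code}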



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-05 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900435#comment-16900435
 ] 

Eric Yang commented on HDFS-2470:
-

[~swagle] Thank you for the patch.

1. The File API is riddled with misbehavior for serious filesystem work.  For 
creating directories and setting file permissions recursively on the newly 
created directories, use the 
[Files|https://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html] API 
instead.

  {code}
  import static java.nio.file.attribute.PosixFilePermission.OWNER_READ;
  import static java.nio.file.attribute.PosixFilePermission.OWNER_WRITE;
  
  ...
  
  Set<PosixFilePermission> permissions = EnumSet.of(OWNER_READ, OWNER_WRITE);
  Files.createDirectory(Paths.get(curDir),
      PosixFilePermissions.asFileAttribute(permissions));
  {code}

  I am not sure about setting permissions on the root of the working directory.  
The directory could be /tmp/namenode, and we would accidentally make /tmp 
read/write only by the hdfs user and fail.

2. javax.annotation.Nullable is a problematic annotation.  Findbugs uses this 
annotation, but it prevents code from working with signed content on JDK 9.  
See HADOOP-16463 for details.  It would be nicer to use 
findbugsExcludeFile.xml to declare that the variable is nullable.

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=289221=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289221
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 05/Aug/19 21:48
Start Date: 05/Aug/19 21:48
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1146: HDDS-1366. Add 
ability in Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r310806041
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,254 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 10Kb..,10MB,..1PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize;
+  private long maxFileSizeUpperBound = 1125899906842624L; // 1 PB
+  private long[] upperBoundCount = new long[maxBinSize];
+  private long ONE_KB = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+  Configuration sqlConfiguration) {
+super("FileSizeCountTask");
+try {
+  tables.add(omMetadataManager.getKeyTable().getName());
+  fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+} catch (Exception e) {
+  LOG.error("Unable to fetch Key Table updates ", e);
+}
+  }
+
+  protected long getOneKB() {
+return ONE_KB;
+  }
+
+  protected long getMaxFileSizeUpperBound() {
+return maxFileSizeUpperBound;
+  }
+
+  protected int getMaxBinSize() {
+return maxBinSize;
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+
+fetchUpperBoundCount("reprocess");
+
+Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+keyIter = omKeyInfoTable.iterator()) {
+  while (keyIter.hasNext()) {
+Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+countFileSize(kv.getValue());
+  }
+} catch (IOException ioEx) {
+  LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+  return new ImmutablePair<>(getTaskName(), false);
+} finally {
+  populateFileCountBySizeDB();
+}
+
+LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  void setMaxBinSize() {
+maxBinSize = (int)(long) (Math.log(getMaxFileSizeUpperBound())
+/Math.log(2)) - 10;
+maxBinSize += 2;  // extra bin to add files > 1PB.
+  }
+
+  void fetchUpperBoundCount(String type) {
+setMaxBinSize();
+if (type.equals("process")) {
+  //update array with file size count from DB
+  List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+  int index = 0;
+ 

[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=289219=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289219
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 05/Aug/19 21:45
Start Date: 05/Aug/19 21:45
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r310805057
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,254 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 10Kb..,10MB,..1PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize;
+  private long maxFileSizeUpperBound = 1125899906842624L; // 1 PB
+  private long[] upperBoundCount = new long[maxBinSize];
+  private long ONE_KB = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+  Configuration sqlConfiguration) {
+super("FileSizeCountTask");
+try {
+  tables.add(omMetadataManager.getKeyTable().getName());
+  fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+} catch (Exception e) {
+  LOG.error("Unable to fetch Key Table updates ", e);
+}
+  }
+
+  protected long getOneKB() {
+return ONE_KB;
+  }
+
+  protected long getMaxFileSizeUpperBound() {
+return maxFileSizeUpperBound;
+  }
+
+  protected int getMaxBinSize() {
+return maxBinSize;
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+
+fetchUpperBoundCount("reprocess");
+
+Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+keyIter = omKeyInfoTable.iterator()) {
+  while (keyIter.hasNext()) {
+Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+countFileSize(kv.getValue());
+  }
+} catch (IOException ioEx) {
+  LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+  return new ImmutablePair<>(getTaskName(), false);
+} finally {
+  populateFileCountBySizeDB();
+}
+
+LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  void setMaxBinSize() {
+maxBinSize = (int)(long) (Math.log(getMaxFileSizeUpperBound())
+/Math.log(2)) - 10;
+maxBinSize += 2;  // extra bin to add files > 1PB.
+  }
+
+  void fetchUpperBoundCount(String type) {
+setMaxBinSize();
+if (type.equals("process")) {
+  //update array with file size count from DB
+  List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+  int index = 

[jira] [Commented] (HDFS-9924) [umbrella] Nonblocking HDFS Access

2019-08-05 Thread Jorge Machado (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900426#comment-16900426
 ] 

Jorge Machado commented on HDFS-9924:
-

Any ideas how to improve the hdfs dfs put command with async IO? Or am I in 
the wrong thread? From what I saw here 
[https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java#L129]
 the copy commands still use blocking IO, right?

> [umbrella] Nonblocking HDFS Access
> --
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Duo Zhang
>Priority: Major
> Attachments: Async-HDFS-Performance-Report.pdf, 
> AsyncHdfs20160510.pdf, HDFS-9924-POC.patch
>
>
> This is an umbrella JIRA for supporting Nonblocking HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support nonblocking calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.
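For a flavor of the proposed style, client code might look like the sketch 
below (AsyncDistributedFileSystem is taken from the attached POC and assumed 
here; it may not match the final API):

{code}
import java.util.concurrent.Future;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Hypothetical usage: each call returns immediately with a Future, so many
// independent calls can be issued from one thread without waiting.
AsyncDistributedFileSystem adfs = dfs.getAsyncDistributedFileSystem();
Future<Void> rename = adfs.rename(new Path("/a"), new Path("/b"));
Future<Void> chmod = adfs.setPermission(new Path("/c"),
    new FsPermission((short) 0755));
rename.get();  // block only when the result is actually needed
chmod.get();
{code}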



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-916) Rewrite DFSOutputStream to use a single thread with NIO

2019-08-05 Thread Jorge Machado (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900421#comment-16900421
 ] 

Jorge Machado commented on HDFS-916:


Hi guys, I know this is pretty old, but is there any status on this? We are 
transferring around 30TB via hdfs dfs copyFromLocal to a Hadoop cluster, and 
currently the CPUs are the bottleneck...  

> Rewrite DFSOutputStream to use a single thread with NIO
> ---
>
> Key: HDFS-916
> URL: https://issues.apache.org/jira/browse/HDFS-916
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Priority: Major
>
> The DFS write pipeline code has some really hairy multi-threaded 
> synchronization. There have been a lot of bugs produced by this (HDFS-101, 
> HDFS-793, HDFS-915, tens of others) since it's very hard to understand the 
> message passing, lock sharing, and interruption properties. The reason for 
> the multiple threads is to be able to simultaneously send and receive. If 
> instead of using multiple threads, it used nonblocking IO, I think the whole 
> thing would be a lot less error prone.
> I think we could do this in two halves: one half is the DFSOutputStream. The 
> other half is BlockReceiver. I opened this JIRA first as I think it's simpler 
> (only one TCP connection to deal with, rather than an up and downstream)
> Opinions? Am I crazy? I would like to see some agreement on the idea before I 
> spend time writing code.
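For a flavor of the single-threaded approach, here is a minimal NIO sketch 
(illustrative only; the address, buffers, and framing are assumptions, not 
DFSOutputStream code):

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One thread multiplexes sending packets and reading acks over the same
// pipeline socket, replacing the separate streamer/responder threads.
public class SingleThreadPipelineSketch {
  public static void main(String[] args) throws IOException {
    Selector selector = Selector.open();
    SocketChannel ch = SocketChannel.open();
    ch.configureBlocking(false);
    ch.connect(new InetSocketAddress("dn1.example.com", 50010)); // hypothetical DN
    ch.register(selector, SelectionKey.OP_CONNECT);

    ByteBuffer packet = ByteBuffer.wrap("packet-bytes".getBytes());
    ByteBuffer ack = ByteBuffer.allocate(8192);

    while (selector.select() > 0) {
      Iterator<SelectionKey> it = selector.selectedKeys().iterator();
      while (it.hasNext()) {
        SelectionKey key = it.next();
        it.remove();
        if (key.isConnectable() && ch.finishConnect()) {
          key.interestOps(SelectionKey.OP_READ | SelectionKey.OP_WRITE);
        }
        if (key.isWritable() && packet.hasRemaining()) {
          ch.write(packet);          // send as much as the socket accepts
        }
        if (key.isReadable()) {
          ch.read(ack);              // accumulate ack bytes; parse when complete
        }
      }
      if (!packet.hasRemaining()) {
        break;                       // demo: stop once the packet is flushed
      }
    }
    ch.close();
    selector.close();
  }
}
{code}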



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-05 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900415#comment-16900415
 ] 

Siddharth Wagle commented on HDFS-2470:
---

05 => checkstyle fixes in addition to what is fixed in 04.

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-05 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDFS-2470:
--
Attachment: HDFS-2470.05.patch

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1488) Scm cli command to start/stop replication manager

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1488?focusedWorklogId=289201=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289201
 ]

ASF GitHub Bot logged work on HDDS-1488:


Author: ASF GitHub Bot
Created on: 05/Aug/19 21:09
Start Date: 05/Aug/19 21:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1221: HDDS-1488. Scm 
cli command to start/stop replication manager.
URL: https://github.com/apache/hadoop/pull/1221#issuecomment-518401232
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 49 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 617 | trunk passed |
   | +1 | compile | 395 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 865 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   | 0 | spotbugs | 469 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 707 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 558 | the patch passed |
   | +1 | compile | 384 | the patch passed |
   | +1 | cc | 384 | the patch passed |
   | +1 | javac | 384 | the patch passed |
   | +1 | checkstyle | 69 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 686 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | +1 | findbugs | 734 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 299 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2091 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8102 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1221/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1221 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 47992829ceda 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d6697da |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1221/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1221/5/testReport/ |
   | Max. process+thread count | 5341 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/client hadoop-hdds/server-scm 
hadoop-hdds/tools U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1221/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289201)
Time Spent: 2h  (was: 1h 50m)

> Scm cli command to start/stop replication manager
> -
>
> Key: HDDS-1488
> URL: https://issues.apache.org/jira/browse/HDDS-1488
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>

[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=289197=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289197
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 05/Aug/19 20:59
Start Date: 05/Aug/19 20:59
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1146: HDDS-1366. Add 
ability in Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r310789626
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,254 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 10Kb..,10MB,..1PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize;
+  private long maxFileSizeUpperBound = 1125899906842624L; // 1 PB
+  private long[] upperBoundCount = new long[maxBinSize];
+  private long ONE_KB = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+  Configuration sqlConfiguration) {
+super("FileSizeCountTask");
+try {
+  tables.add(omMetadataManager.getKeyTable().getName());
+  fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+} catch (Exception e) {
+  LOG.error("Unable to fetch Key Table updates ", e);
+}
+  }
+
+  protected long getOneKB() {
+return ONE_KB;
+  }
+
+  protected long getMaxFileSizeUpperBound() {
+return maxFileSizeUpperBound;
+  }
+
+  protected int getMaxBinSize() {
+return maxBinSize;
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+
+fetchUpperBoundCount("reprocess");
+
+Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+keyIter = omKeyInfoTable.iterator()) {
+  while (keyIter.hasNext()) {
+Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+countFileSize(kv.getValue());
+  }
+} catch (IOException ioEx) {
+  LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+  return new ImmutablePair<>(getTaskName(), false);
+} finally {
+  populateFileCountBySizeDB();
+}
+
+LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  void setMaxBinSize() {
+maxBinSize = (int)(long) (Math.log(getMaxFileSizeUpperBound())
+/Math.log(2)) - 10;
+maxBinSize += 2;  // extra bin to add files > 1PB.
+  }
+
+  void fetchUpperBoundCount(String type) {
+setMaxBinSize();
+if (type.equals("process")) {
+  //update array with file size count from DB
+  List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+  int index = 0;
+ 

[jira] [Commented] (HDFS-13101) Yet another fsimage corruption related to snapshot

2019-08-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900398#comment-16900398
 ] 

Hadoop QA commented on HDFS-13101:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 10 new + 154 unchanged - 0 fixed = 164 total (was 154) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-13101 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976729/HDFS-13101.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8b45490bccf3 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c589983 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27407/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27407/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test 

[jira] [Work logged] (HDDS-1488) Scm cli command to start/stop replication manager

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1488?focusedWorklogId=289181=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289181
 ]

ASF GitHub Bot logged work on HDDS-1488:


Author: ASF GitHub Bot
Created on: 05/Aug/19 20:36
Start Date: 05/Aug/19 20:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1221: HDDS-1488. Scm 
cli command to start/stop replication manager.
URL: https://github.com/apache/hadoop/pull/1221#issuecomment-518389710
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 583 | trunk passed |
   | +1 | compile | 365 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 830 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 422 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 613 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 546 | the patch passed |
   | +1 | compile | 358 | the patch passed |
   | +1 | cc | 358 | the patch passed |
   | +1 | javac | 358 | the patch passed |
   | +1 | checkstyle | 79 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 650 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | the patch passed |
   | +1 | findbugs | 626 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 318 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2394 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 8052 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1221/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1221 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 0a3c21a34f04 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d6697da |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1221/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1221/3/testReport/ |
   | Max. process+thread count | 4793 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/client hadoop-hdds/server-scm 
hadoop-hdds/tools U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1221/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289181)
Time Spent: 1h 50m  (was: 1h 40m)

> Scm cli command to start/stop replication 

[jira] [Commented] (HDFS-13101) Yet another fsimage corruption related to snapshot

2019-08-05 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900382#comment-16900382
 ] 

Tsz Wo Nicholas Sze commented on HDFS-13101:


+1 the 003 patch looks good.  Pending Jenkins.

> Yet another fsimage corruption related to snapshot
> --
>
> Key: HDFS-13101
> URL: https://issues.apache.org/jira/browse/HDFS-13101
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Yongjun Zhang
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13101.001.patch, HDFS-13101.002.patch, 
> HDFS-13101.003.patch, HDFS-13101.corruption_repro.patch, 
> HDFS-13101.corruption_repro_simplified.patch
>
>
> Lately we saw a case similar to HDFS-9406. Even though the HDFS-9406 fix is 
> present, it's likely another case not covered by that fix. We are currently 
> trying to collect a good fsimage + editlogs to replay, reproduce the issue, 
> and investigate. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14687) Standby Namenode never come out of safemode when EC files are being written.

2019-08-05 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900371#comment-16900371
 ] 

Surendra Singh Lilhore commented on HDFS-14687:
---

Fixed the failed test case.

> Standby Namenode never come out of safemode when EC files are being written.
> 
>
> Key: HDFS-14687
> URL: https://issues.apache.org/jira/browse/HDFS-14687
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, namenode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: HDFS-14687.001.patch, HDFS-14687.002.patch
>
>
> When a huge number of EC files are being written and the SBN is restarted, it 
> will never come out of safe mode and the required block count keeps increasing.
> {noformat}
> The reported blocks 16658401 needs additional 1702 blocks to reach the 
> threshold 0.9 of total blocks 16660120.
> The reported blocks 16658659 needs additional 2935 blocks to reach the 
> threshold 0.9 of total blocks 16661611.
> The reported blocks 16659947 needs additional 3868 blocks to reach the 
> threshold 0.9 of total blocks 16663832.
> The reported blocks 1335 needs additional 5116 blocks to reach the 
> threshold 0.9 of total blocks 16671468.
> The reported blocks 16669311 needs additional 6384 blocks to reach the 
> threshold 0.9 of total blocks 16675712.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14687) Standby Namenode never come out of safemode when EC files are being written.

2019-08-05 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-14687:
--
Attachment: HDFS-14687.002.patch

> Standby Namenode never come out of safemode when EC files are being written.
> 
>
> Key: HDFS-14687
> URL: https://issues.apache.org/jira/browse/HDFS-14687
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, namenode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: HDFS-14687.001.patch, HDFS-14687.002.patch
>
>
> When a huge number of EC files are being written and the SBN is restarted, it 
> will never come out of safe mode and the required block count keeps increasing.
> {noformat}
> The reported blocks 16658401 needs additional 1702 blocks to reach the 
> threshold 0.9 of total blocks 16660120.
> The reported blocks 16658659 needs additional 2935 blocks to reach the 
> threshold 0.9 of total blocks 16661611.
> The reported blocks 16659947 needs additional 3868 blocks to reach the 
> threshold 0.9 of total blocks 16663832.
> The reported blocks 1335 needs additional 5116 blocks to reach the 
> threshold 0.9 of total blocks 16671468.
> The reported blocks 16669311 needs additional 6384 blocks to reach the 
> threshold 0.9 of total blocks 16675712.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14195) OIV: print out storage policy id in oiv Delimited output

2019-08-05 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900337#comment-16900337
 ] 

Wei-Chiu Chuang edited comment on HDFS-14195 at 8/5/19 7:24 PM:


Thanks [~suxingfate], really appreciate your work! And thanks [~adam.antal] for 
helping with the review!
Other than the checkstyle warnings, a few nits:
{code}
+ "-sp print storage policy.\n"
{code}
We should also state this is used by delimited output only. Storage policy is 
always exported for XML output.

We should also update the doc. Can be a separate jira.

{code}
BufferedReader reader = new BufferedReader(new FileReader(file));
{code}
the reader is not closed properly. This will result in leaked open file 
descriptors.

{code}
  FSDataOutputStream o = hdfs.create(file);
  o.write(123);
  o.close();
{code}
You should ideally use try-with-resources to ensure the resource is not leaked 
upon failure (a sketch follows below). It is less critical for test code, but 
nice to have.
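For illustration, both snippets in try-with-resources form (a sketch, not the 
exact patch code):

{code}
try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
  // read from the reader; close() runs even when an exception is thrown
}

try (FSDataOutputStream o = hdfs.create(file)) {
  o.write(123);
}  // o.close() happens automatically here
{code}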

I think the patch is ready for commit after these cleanups.


was (Author: jojochuang):
Thanks [~suxingfate], really appreciate your work! And thanks [~adam.antal] for 
helping with the review!
Other than the checkstyle warnings, a few nits:
{code}
+ "-sp print storage policy.\n"
{code}
We should also state this is used by delimited output only. Storage policy is 
always exported for XML output.

We should also update the doc. Can be a separate jira.

{code}
BufferedReader reader = new BufferedReader(new FileReader(file));
{code}
the reader is not closed properly. This will result in leaked open file 
descriptors.

{code}
  FSDataOutputStream o = hdfs.create(file);
  o.write(123);
  o.close();
{code}
You should ideally use try-with-resources to ensure the resource is not leaked 
upon failure. It is less critical for test code, but nice to have.

> OIV: print out storage policy id in oiv Delimited output
> 
>
> Key: HDFS-14195
> URL: https://issues.apache.org/jira/browse/HDFS-14195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HDFS-14195.001.patch, HDFS-14195.002.patch, 
> HDFS-14195.003.patch, HDFS-14195.004.patch, HDFS-14195.005.patch, 
> HDFS-14195.006.patch, HDFS-14195.007.patch, HDFS-14195.008.patch
>
>
> There is no method to get all folders and files with a specified storage 
> policy (such as ALL_SSD) via the command line.
> Adding the storage policy id to the oiv output will help oiv post-analysis 
> get an overview of all folders/files with a specified storage policy and 
> apply internal regulation based on this information.
>  
> Currently, for PBImageXmlWriter.java, HDFS-9835 already added a function to 
> print out xattrs, which include the storage policy.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14195) OIV: print out storage policy id in oiv Delimited output

2019-08-05 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900337#comment-16900337
 ] 

Wei-Chiu Chuang commented on HDFS-14195:


Thanks [~suxingfate], really appreciate your work!
Other than the checkstyle warnings, a few nits:
{code}
+ "-sp print storage policy.\n"
{code}
We should also state this is used by delimited output only. Storage policy is 
always exported for XML output.

We should also update the doc. Can be a separate jira.

{code}
BufferedReader reader = new BufferedReader(new FileReader(file));
{code}
the reader is not closed properly. This will result in leaked open file 
descriptors.

{code}
  FSDataOutputStream o = hdfs.create(file);
  o.write(123);
  o.close();
{code}
You should ideally use try-with-resources to ensure the resource is not leaked 
upon failure. It is less critical for test code, but nice to have.

> OIV: print out storage policy id in oiv Delimited output
> 
>
> Key: HDFS-14195
> URL: https://issues.apache.org/jira/browse/HDFS-14195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HDFS-14195.001.patch, HDFS-14195.002.patch, 
> HDFS-14195.003.patch, HDFS-14195.004.patch, HDFS-14195.005.patch, 
> HDFS-14195.006.patch, HDFS-14195.007.patch, HDFS-14195.008.patch
>
>
> There is no method to get all folders and files with a specified storage 
> policy (such as ALL_SSD) via the command line.
> Adding the storage policy id to the oiv output will help oiv post-analysis 
> get an overview of all folders/files with a specified storage policy and 
> apply internal regulation based on this information.
>  
> Currently, for PBImageXmlWriter.java, HDFS-9835 already added a function to 
> print out xattrs, which include the storage policy.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14195) OIV: print out storage policy id in oiv Delimited output

2019-08-05 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900337#comment-16900337
 ] 

Wei-Chiu Chuang edited comment on HDFS-14195 at 8/5/19 7:23 PM:


Thanks [~suxingfate], really appreciate your work! And thanks [~adam.antal] for 
helping with the review!
Other than the checkstyle warnings, a few nits:
{code}
+ "-sp print storage policy.\n"
{code}
We should also state this is used by delimited output only. Storage policy is 
always exported for XML output.

We should also update the doc. Can be a separate jira.

{code}
BufferedReader reader = new BufferedReader(new FileReader(file));
{code}
the reader is not closed properly. This will result in leaked open file 
descriptors.

{code}
  FSDataOutputStream o = hdfs.create(file);
  o.write(123);
  o.close();
{code}
You should ideally use try-with-resources to ensure the resource is not leaked 
upon failure. It is less critical for test code, but nice to have.


was (Author: jojochuang):
Thanks [~suxingfate], really appreciate your work!
Other than the checkstyle warnings, a few nits:
{code}
+ "-sp print storage policy.\n"
{code}
We should also state this is used by delimited output only. Storage policy is 
always exported for XML output.

We should also update the doc. Can be a separate jira.

{code}
BufferedReader reader = new BufferedReader(new FileReader(file));
{code}
the reader is not closed properly. This will result in leaked open file 
descriptors.

{code}
  FSDataOutputStream o = hdfs.create(file);
  o.write(123);
  o.close();
{code}
You should ideally use try-with-resources to ensure the resource is not leaked 
upon failure. It is less critical for test code, but nice to have.

> OIV: print out storage policy id in oiv Delimited output
> 
>
> Key: HDFS-14195
> URL: https://issues.apache.org/jira/browse/HDFS-14195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HDFS-14195.001.patch, HDFS-14195.002.patch, 
> HDFS-14195.003.patch, HDFS-14195.004.patch, HDFS-14195.005.patch, 
> HDFS-14195.006.patch, HDFS-14195.007.patch, HDFS-14195.008.patch
>
>
> There is no method to get all folders and files with a specified storage 
> policy (such as ALL_SSD) via the command line.
> Adding the storage policy id to the oiv output will help oiv post-analysis 
> get an overview of all folders/files with a specified storage policy and 
> apply internal regulation based on this information.
>  
> Currently, for PBImageXmlWriter.java, HDFS-9835 already added a function to 
> print out xattrs, which include the storage policy.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1901) Fix Ozone HTTP WebConsole Authentication

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1901?focusedWorklogId=289151=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289151
 ]

ASF GitHub Bot logged work on HDDS-1901:


Author: ASF GitHub Bot
Created on: 05/Aug/19 19:21
Start Date: 05/Aug/19 19:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1228: HDDS-1901. Fix 
Ozone HTTP WebConsole Authentication. Contributed by X…
URL: https://github.com/apache/hadoop/pull/1228#issuecomment-518364636
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 88 | Maven dependency ordering for branch |
   | +1 | mvninstall | 584 | trunk passed |
   | +1 | compile | 373 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 778 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | trunk passed |
   | 0 | spotbugs | 424 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 618 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for patch |
   | +1 | mvninstall | 561 | the patch passed |
   | +1 | compile | 369 | the patch passed |
   | +1 | javac | 369 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 635 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | the patch passed |
   | +1 | findbugs | 643 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 287 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1794 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 7605 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1228/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1228 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml shellcheck shelldocs |
   | uname | Linux cc84026f91f7 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 71aad60 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1228/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1228/1/testReport/ |
   | Max. process+thread count | 3939 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/docs hadoop-ozone/common 
hadoop-ozone/dist U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1228/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289151)
Time Spent: 20m  (was: 10m)

> Fix Ozone HTTP WebConsole Authentication
> 
>
> Key: HDDS-1901
> URL: 

[jira] [Commented] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-05 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900326#comment-16900326
 ] 

Chen Liang commented on HDFS-14204:
---

Thanks for review [~shv]. Post v005 patch.

> Backport HDFS-12943 to branch-2
> ---
>
> Key: HDFS-14204
> URL: https://issues.apache.org/jira/browse/HDFS-14204
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14204-branch-2.001.patch, 
> HDFS-14204-branch-2.002.patch, HDFS-14204-branch-2.003.patch, 
> HDFS-14204-branch-2.004.patch, HDFS-14204-branch-2.005.patch
>
>
> Currently, consistent read from standby feature (HDFS-12943) is only in trunk 
> (branch-3). This JIRA aims to backport the feature to branch-2.  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14204) Backport HDFS-12943 to branch-2

2019-08-05 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14204:
--
Attachment: HDFS-14204-branch-2.005.patch

> Backport HDFS-12943 to branch-2
> ---
>
> Key: HDFS-14204
> URL: https://issues.apache.org/jira/browse/HDFS-14204
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14204-branch-2.001.patch, 
> HDFS-14204-branch-2.002.patch, HDFS-14204-branch-2.003.patch, 
> HDFS-14204-branch-2.004.patch, HDFS-14204-branch-2.005.patch
>
>
> Currently, consistent read from standby feature (HDFS-12943) is only in trunk 
> (branch-3). This JIRA aims to backport the feature to branch-2.  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289135=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289135
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 05/Aug/19 18:59
Start Date: 05/Aug/19 18:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518357010
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 89 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 656 | trunk passed |
   | +1 | compile | 391 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1077 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 219 | trunk passed |
   | 0 | spotbugs | 530 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 798 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 672 | the patch passed |
   | +1 | compile | 446 | the patch passed |
   | +1 | javac | 446 | the patch passed |
   | +1 | checkstyle | 106 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 879 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 211 | the patch passed |
   | +1 | findbugs | 812 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 408 | hadoop-hdds in the patch passed. |
   | -1 | unit | 301 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
   | | | 7408 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 67e557db9c30 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 71aad60 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/7/testReport/ |
   | Max. process+thread count | 1303 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289135)
Time Spent: 2h  (was: 1h 50m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: 

[jira] [Work logged] (HDDS-1810) SCM command to Activate and Deactivate pipelines

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1810?focusedWorklogId=289134=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289134
 ]

ASF GitHub Bot logged work on HDDS-1810:


Author: ASF GitHub Bot
Created on: 05/Aug/19 18:58
Start Date: 05/Aug/19 18:58
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1224: HDDS-1810. SCM 
command to Activate and Deactivate pipelines.
URL: https://github.com/apache/hadoop/pull/1224#issuecomment-518356643
 
 
   > Will deactivate trigger closing containers on the pipeline?
   
   No, we will not close the containers. We will not do anything to the 
pipeline other than moving it out of the allocation path.
   
   > What is the difference between close and recreate?
   
   Activate and deactivate are used mainly for debugging purposes; we don't 
want to create a new pipeline here.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289134)
Time Spent: 1h  (was: 50m)

> SCM command to Activate and Deactivate pipelines
> 
>
> Key: HDDS-1810
> URL: https://issues.apache.org/jira/browse/HDDS-1810
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: SCM, SCM Client
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> It will be useful to have scm command to temporarily deactivate and 
> re-activate a pipeline. This will help us a lot in debugging a pipeline.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1810) SCM command to Activate and Deactivate pipelines

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1810?focusedWorklogId=289132=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289132
 ]

ASF GitHub Bot logged work on HDDS-1810:


Author: ASF GitHub Bot
Created on: 05/Aug/19 18:58
Start Date: 05/Aug/19 18:58
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1224: HDDS-1810. SCM 
command to Activate and Deactivate pipelines.
URL: https://github.com/apache/hadoop/pull/1224#issuecomment-518356643
 
 
   > Will deactivate trigger closing containers on the pipeline?
   
   No, we will not close the containers. We will not do anything to the 
pipeline other than moving it out of the allocation path.
   
   > What is the difference between close and recreate?
   
   Activate and deactivate are used mainly for debugging a pipeline; we don't 
want to create a new pipeline here.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289132)
Time Spent: 50m  (was: 40m)

> SCM command to Activate and Deactivate pipelines
> 
>
> Key: HDDS-1810
> URL: https://issues.apache.org/jira/browse/HDDS-1810
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: SCM, SCM Client
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> It will be useful to have scm command to temporarily deactivate and 
> re-activate a pipeline. This will help us a lot in debugging a pipeline.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1810) SCM command to Activate and Deactivate pipelines

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1810?focusedWorklogId=289131=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289131
 ]

ASF GitHub Bot logged work on HDDS-1810:


Author: ASF GitHub Bot
Created on: 05/Aug/19 18:58
Start Date: 05/Aug/19 18:58
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1224: HDDS-1810. SCM 
command to Activate and Deactivate pipelines.
URL: https://github.com/apache/hadoop/pull/1224#issuecomment-518356643
 
 
   > Will deactivate trigger closing containers on the pipeline?
   No, we will not close the containers. We will not do anything to the 
pipeline other than moving it out of the allocation path.
   
   > What is the difference between close and recreate?
   Activate and deactivate are used mainly for debugging a pipeline; we don't 
want to create a new pipeline here.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289131)
Time Spent: 40m  (was: 0.5h)

> SCM command to Activate and Deactivate pipelines
> 
>
> Key: HDDS-1810
> URL: https://issues.apache.org/jira/browse/HDDS-1810
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: SCM, SCM Client
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> It will be useful to have scm command to temporarily deactivate and 
> re-activate a pipeline. This will help us a lot in debugging a pipeline.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289130=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289130
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 05/Aug/19 18:46
Start Date: 05/Aug/19 18:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518352807
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 619 | trunk passed |
   | +1 | compile | 352 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 826 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | trunk passed |
   | 0 | spotbugs | 473 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 687 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 599 | the patch passed |
   | +1 | compile | 378 | the patch passed |
   | +1 | javac | 378 | the patch passed |
   | +1 | checkstyle | 72 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 654 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | +1 | findbugs | 680 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 304 | hadoop-hdds in the patch passed. |
   | -1 | unit | 187 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 5992 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   |   | hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 04faebb4bd36 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 71aad60 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/8/testReport/ |
   | Max. process+thread count | 1344 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289130)
Time Spent: 1h 50m  (was: 1h 40m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=289124=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289124
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 05/Aug/19 18:34
Start Date: 05/Aug/19 18:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-518348571
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 599 | trunk passed |
   | +1 | compile | 364 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 827 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 483 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 708 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 588 | the patch passed |
   | +1 | compile | 394 | the patch passed |
   | +1 | javac | 394 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 680 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | +1 | findbugs | 703 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 295 | hadoop-hdds in the patch passed. |
   | -1 | unit | 190 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 6050 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.request.s3.bucket.TestS3BucketCreateRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 73c29aed3e0b 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 71aad60 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/6/testReport/ |
   | Max. process+thread count | 1326 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289124)
Time Spent: 1h 40m  (was: 1.5h)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: 

[jira] [Work logged] (HDDS-1810) SCM command to Activate and Deactivate pipelines

2019-08-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1810?focusedWorklogId=289093=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289093
 ]

ASF GitHub Bot logged work on HDDS-1810:


Author: ASF GitHub Bot
Created on: 05/Aug/19 18:10
Start Date: 05/Aug/19 18:10
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1224: HDDS-1810. SCM 
command to Activate and Deactivate pipelines.
URL: https://github.com/apache/hadoop/pull/1224#issuecomment-518340119
 
 
   Will deactivate trigger closing containers on the pipeline? What is the 
difference between close and recreate?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289093)
Time Spent: 0.5h  (was: 20m)

> SCM command to Activate and Deactivate pipelines
> 
>
> Key: HDDS-1810
> URL: https://issues.apache.org/jira/browse/HDDS-1810
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: SCM, SCM Client
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> It will be useful to have scm command to temporarily deactivate and 
> re-activate a pipeline. This will help us a lot in debugging a pipeline.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14697) Backport HDFS-14513 to branch-2

2019-08-05 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900281#comment-16900281
 ] 

Íñigo Goiri commented on HDFS-14697:


Basically the changes are in the logging.
+1 on  [^HDFS-14697.branch-2.001.patch].

> Backport HDFS-14513 to branch-2
> ---
>
> Key: HDFS-14697
> URL: https://issues.apache.org/jira/browse/HDFS-14697
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Minor
> Attachments: HDFS-14697.branch-2.001.patch
>
>
> Backport HDFS-14513 "FSImage which is saving should be clean while NameNode 
> shutdown." to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14450) Erasure Coding: decommissioning datanodes cause replicate a large number of duplicate EC internal blocks

2019-08-05 Thread Wu Weiwei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wu Weiwei updated HDFS-14450:
-
Attachment: HDFS-14450-000.patch

> Erasure Coding: decommissioning datanodes cause replicate a large number of 
> duplicate EC internal blocks
> 
>
> Key: HDFS-14450
> URL: https://issues.apache.org/jira/browse/HDFS-14450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wu Weiwei
>Assignee: Wu Weiwei
>Priority: Major
> Attachments: HDFS-14450-000.patch
>
>
> {code:java}
> //  [WARN] [RedundancyMonitor] : Failed to place enough replicas, still in 
> need of 2 to reach 167 (unavailableStorages=[DISK, ARCHIVE], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All 
> required storage types are unavailable:  unavailableStorages=[DISK, ARCHIVE], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> In a large-scale cluster, decommissioning a large number of datanodes causes 
> EC block groups to replicate a large number of duplicate internal blocks.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14608) DataNode$DataTransfer should be named

2019-08-05 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900277#comment-16900277
 ] 

Íñigo Goiri commented on HDFS-14608:


It is a little suspicious to have so many failed unit tests.
I checked, and it doesn't look like those tests check for the thread name or 
anything like that.

> DataNode$DataTransfer should be named
> -
>
> Key: HDFS-14608
> URL: https://issues.apache.org/jira/browse/HDFS-14608
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14608.000.patch, HDFS-14608.001.patch
>
>
> Currently, the {{DataTransfer}} thread has no name and it just outputs the 
> default {{toString()}}.
> This shows up in the logs and in jstack as something like:
> {code}
> 2019-06-25 11:01:01,211 INFO 
> [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@609ed67a] 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at 
> CO4AEAPC1AF:10010: Transmitted 
> BP-1191059133-10.1.2.3-145702348:blk_1113379522_69745835 
> (numBytes=485214) to 10.1.2.3/10.1.2.3:10010
> {code}
> As this uses the {{Daemon}} class, the name is set based on:
> {code}
>   public Daemon(Runnable runnable) {
> super(runnable);
> this.runnable = runnable;
> this.setName(((Object)runnable).toString());
>   }
> {code}
> We should implement toString to at least include the name of the block being 
> transferred, or something similar to what DataXceiver does (e.g., HDFS-3375).
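
A minimal sketch of that idea, assuming the {{DataTransfer}} class keeps the 
block in a field {{b}} and the destinations in {{targets}} (names illustrative):
{code:java}
// Daemon sets the thread name from runnable.toString(), so overriding
// toString() inside DataNode$DataTransfer names the thread meaningfully.
@Override
public String toString() {
  return "DataTransfer " + b + " to " + java.util.Arrays.asList(targets);
}
{code}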



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1896) Suppress WARN log from NetworkTopology#getDistanceCost

2019-08-05 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1896:
-
Fix Version/s: (was: 0.5.0)
   0.4.1

> Suppress WARN log from NetworkTopology#getDistanceCost 
> ---
>
> Key: HDDS-1896
> URL: https://issues.apache.org/jira/browse/HDDS-1896
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When RackAwareness is enabled and a client connects from outside the 
> topology, the distance calculation floods the SCM log with the following 
> messages. This ticket is opened to suppress the WARN log.
> {code}
> 2019-08-01 23:08:05,011 WARN org.apache.hadoop.hdds.scm.net.NetworkTopology: 
> One of the nodes is outside of network topology
> 2019-08-01 23:08:05,011 WARN org.apache.hadoop.hdds.scm.net.NetworkTopology: 
> One of the nodes is outside of network topology
> 2019-08-01 23:08:05,011 WARN org.apache.hadoop.hdds.scm.net.NetworkTopology: 
> One of the nodes is outside of network topology
> {code}
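
A minimal sketch of one way to do this, demoting the message to debug level 
(not necessarily the committed fix):
{code:java}
// Before: every distance lookup for an outside client logs at WARN,
// flooding the SCM log.
// LOG.warn("One of the nodes is outside of network topology");

// After: keep the information available without the flood.
if (LOG.isDebugEnabled()) {
  LOG.debug("One of the nodes is outside of network topology");
}
{code}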



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1907) TestOzoneRpcClientWithRatis is failing with ACL errors

2019-08-05 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900264#comment-16900264
 ] 

Bharat Viswanadham edited comment on HDDS-1907 at 8/5/19 5:41 PM:
--

These tests can be fixed once ACL support is implemented as part of the HA 
implementation.

(HA is not a targeted feature for 0.4.1, so we can disable the tests.)

 


was (Author: bharatviswa):
These tests can be fixed once ACL support is implemented as part of the HA 
implementation.

 

> TestOzoneRpcClientWithRatis is failing with ACL errors
> --
>
> Key: HDDS-1907
> URL: https://issues.apache.org/jira/browse/HDDS-1907
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Priority: Major
>
> {noformat}
> [ERROR] 
> testNativeAclsForKey(org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis)
>   Time elapsed: 0.176 s  <<< FAILURE!
> java.lang.AssertionError: Current acls:,[user:nvadivelu:a[ACCESS], 
> group:staff:a[ACCESS], group:everyone:a[ACCESS], 
> group:localaccounts:a[ACCESS], group:_appserverusr:a[ACCESS], 
> group:admin:a[ACCESS], group:_appserveradm:a[ACCESS], 
> group:_lpadmin:a[ACCESS], group:com.apple.sharepoint.group.1:a[ACCESS], 
> group:com.apple.sharepoint.group.2:a[ACCESS], group:_appstore:a[ACCESS], 
> group:_lpoperator:a[ACCESS], group:_developer:a[ACCESS], 
> group:_analyticsusers:a[ACCESS], group:com.apple.access_ftp:a[ACCESS], 
> group:com.apple.access_screensharing:a[ACCESS], 
> group:com.apple.access_ssh:a[ACCESS], 
> group:com.apple.sharepoint.group.3:a[ACCESS]] 
> inheritedUserAcl:user:remoteUser:r[ACCESS]
> [ERROR] 
> testNativeAclsForBucket(org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis)
>   Time elapsed: 0.074 s  <<< FAILURE!
> java.lang.AssertionError
> [ERROR] 
> testNativeAclsForPrefix(org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis)
>   Time elapsed: 0.061 s  <<< FAILURE!
> java.lang.AssertionError: Current acls:,[user:nvadivelu:a[ACCESS], 
> group:staff:a[ACCESS], group:everyone:a[ACCESS], 
> group:localaccounts:a[ACCESS], group:_appserverusr:a[ACCESS], 
> group:admin:a[ACCESS], group:_appserveradm:a[ACCESS], 
> group:_lpadmin:a[ACCESS], group:com.apple.sharepoint.group.1:a[ACCESS], 
> group:com.apple.sharepoint.group.2:a[ACCESS], group:_appstore:a[ACCESS], 
> group:_lpoperator:a[ACCESS], group:_developer:a[ACCESS], 
> group:_analyticsusers:a[ACCESS], group:com.apple.access_ftp:a[ACCESS], 
> group:com.apple.access_screensharing:a[ACCESS], 
> group:com.apple.access_ssh:a[ACCESS], 
> group:com.apple.sharepoint.group.3:a[ACCESS]] 
> inheritedUserAcl:user:remoteUser:r[ACCESS]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1907) TestOzoneRpcClientWithRatis is failing with ACL errors

2019-08-05 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900264#comment-16900264
 ] 

Bharat Viswanadham commented on HDDS-1907:
--

These tests can be fixed once ACL support is implemented as part of the HA 
implementation.

 

> TestOzoneRpcClientWithRatis is failing with ACL errors
> --
>
> Key: HDDS-1907
> URL: https://issues.apache.org/jira/browse/HDDS-1907
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Priority: Major
>
> {noformat}
> [ERROR] 
> testNativeAclsForKey(org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis)
>   Time elapsed: 0.176 s  <<< FAILURE!
> java.lang.AssertionError: Current acls:,[user:nvadivelu:a[ACCESS], 
> group:staff:a[ACCESS], group:everyone:a[ACCESS], 
> group:localaccounts:a[ACCESS], group:_appserverusr:a[ACCESS], 
> group:admin:a[ACCESS], group:_appserveradm:a[ACCESS], 
> group:_lpadmin:a[ACCESS], group:com.apple.sharepoint.group.1:a[ACCESS], 
> group:com.apple.sharepoint.group.2:a[ACCESS], group:_appstore:a[ACCESS], 
> group:_lpoperator:a[ACCESS], group:_developer:a[ACCESS], 
> group:_analyticsusers:a[ACCESS], group:com.apple.access_ftp:a[ACCESS], 
> group:com.apple.access_screensharing:a[ACCESS], 
> group:com.apple.access_ssh:a[ACCESS], 
> group:com.apple.sharepoint.group.3:a[ACCESS]] 
> inheritedUserAcl:user:remoteUser:r[ACCESS]
> [ERROR] 
> testNativeAclsForBucket(org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis)
>   Time elapsed: 0.074 s  <<< FAILURE!
> java.lang.AssertionError
> [ERROR] 
> testNativeAclsForPrefix(org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis)
>   Time elapsed: 0.061 s  <<< FAILURE!
> java.lang.AssertionError: Current acls:,[user:nvadivelu:a[ACCESS], 
> group:staff:a[ACCESS], group:everyone:a[ACCESS], 
> group:localaccounts:a[ACCESS], group:_appserverusr:a[ACCESS], 
> group:admin:a[ACCESS], group:_appserveradm:a[ACCESS], 
> group:_lpadmin:a[ACCESS], group:com.apple.sharepoint.group.1:a[ACCESS], 
> group:com.apple.sharepoint.group.2:a[ACCESS], group:_appstore:a[ACCESS], 
> group:_lpoperator:a[ACCESS], group:_developer:a[ACCESS], 
> group:_analyticsusers:a[ACCESS], group:com.apple.access_ftp:a[ACCESS], 
> group:com.apple.access_screensharing:a[ACCESS], 
> group:com.apple.access_ssh:a[ACCESS], 
> group:com.apple.sharepoint.group.3:a[ACCESS]] 
> inheritedUserAcl:user:remoteUser:r[ACCESS]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1901) Fix Ozone HTTP WebConsole Authentication

2019-08-05 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1901:
-
Status: Patch Available  (was: Open)

> Fix Ozone HTTP WebConsole Authentication
> 
>
> Key: HDDS-1901
> URL: https://issues.apache.org/jira/browse/HDDS-1901
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This was found during integration testing: HTTP authentication is enabled, 
> but anonymous users can still access the Ozone HTTP web console, such as 
> scm:9876 or om:9874. This can be reproduced with the following configurations 
> added to the ozonesecure docker-compose.
> {code}
> CORE-SITE.XML_hadoop.http.authentication.simple.anonymous.allowed=false
> CORE-SITE.XML_hadoop.http.authentication.signature.secret.file=/etc/security/http_secret
> CORE-SITE.XML_hadoop.http.authentication.type=kerberos
> CORE-SITE.XML_hadoop.http.authentication.kerberos.principal=HTTP/_h...@example.com
> CORE-SITE.XML_hadoop.http.authentication.kerberos.keytab=/etc/security/keytabs/HTTP.keytab
> CORE-SITE.XML_hadoop.http.filter.initializers=org.apache.hadoop.security.AuthenticationFilterInitializer
> {code}
> After debugging into the KerberosAuthenticationFilter, the root cause is that 
> the name of the keytab property does not follow the AuthenticationFilter 
> convention. The fix is to change 
> hdds.scm.http.kerberos.keytab.file to hdds.scm.http.kerberos.keytab and
> hdds.om.http.kerberos.keytab.file to hdds.om.http.kerberos.keytab.
> I will also add an integration test for this under the ozonesecure 
> docker-compose. A sketch of the renamed keys follows below.
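
For reference, a sketch of what the renamed keys might look like in the 
ozonesecure docker-compose, following the convention shown above (the 
OZONE-SITE.XML_ prefix and keytab path are assumptions):
{code}
OZONE-SITE.XML_hdds.scm.http.kerberos.keytab=/etc/security/keytabs/HTTP.keytab
OZONE-SITE.XML_hdds.om.http.kerberos.keytab=/etc/security/keytabs/HTTP.keytab
{code}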



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1893) Fix bug in removeAcl in Bucket

2019-08-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900259#comment-16900259
 ] 

Hudson commented on HDDS-1893:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17041 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17041/])
HDDS-1893. Fix bug in removeAcl in Bucket. (#1216) (xyao: rev 
c589983e9cebe72f6db4f0141b9bb1b77c052838)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java


> Fix bug in removeAcl in Bucket
> --
>
> Key: HDDS-1893
> URL: https://issues.apache.org/jira/browse/HDDS-1893
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, Security
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:java}
> // When we are removing subset of rights from existing acl.
> for(OzoneAcl a: bucketInfo.getAcls()) {
>  if(a.getName().equals(acl.getName()) &&
>  a.getType().equals(acl.getType())) {
>  BitSet bits = (BitSet) acl.getAclBitSet().clone();
>  bits.and(a.getAclBitSet());
>  if (bits.equals(ZERO_BITSET)) {
>  return false;
>  }
>  bits = (BitSet) acl.getAclBitSet().clone();
>  bits.and(a.getAclBitSet());
>  a.getAclBitSet().xor(bits);
>  if(a.getAclBitSet().equals(ZERO_BITSET)) {
>  bucketInfo.getAcls().remove(a);
>  }
>  break;
>  } else {
>  return false;
>  }{code}
> In the for loop, if the first entry does not match the name and type, the 
> else branch returns false immediately. We should iterate over the entire ACL 
> list before returning a response; a sketch of the corrected flow follows 
> below.
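
A minimal sketch of the corrected control flow, reusing the names from the 
snippet above (java.util.BitSet; the return-value semantics are assumed):
{code:java}
// Scan the whole ACL list; report failure only after no entry matched.
boolean removed = false;
for (OzoneAcl a : bucketInfo.getAcls()) {
  if (a.getName().equals(acl.getName())
      && a.getType().equals(acl.getType())) {
    BitSet bits = (BitSet) acl.getAclBitSet().clone();
    bits.and(a.getAclBitSet());
    if (bits.equals(ZERO_BITSET)) {
      return false; // none of the requested rights are present on this entry
    }
    a.getAclBitSet().xor(bits); // drop only the matched rights
    if (a.getAclBitSet().equals(ZERO_BITSET)) {
      bucketInfo.getAcls().remove(a);
    }
    removed = true;
    break;
  }
}
return removed; // false only if no entry matched both name and type
{code}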



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14476) lock too long when fix inconsistent blocks between disk and in-memory

2019-08-05 Thread Sean Chow (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900255#comment-16900255
 ] 

Sean Chow commented on HDFS-14476:
--

Thanks for reviewing. The patch already attached works for branch-2.6 through 
branch-2.9, so I will add a rebased patch tomorrow.

By the way, do I need to add this same patch to branch-3? I see the 
{{reconcile}} is a little different.

> lock too long when fix inconsistent blocks between disk and in-memory
> -
>
> Key: HDFS-14476
> URL: https://issues.apache.org/jira/browse/HDFS-14476
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Attachments: HDFS-14476.00.patch, datanode-with-patch-14476.png
>
>
> When DirectoryScanner has the results of differences between on-disk and 
> in-memory blocks, it will try to run {{checkAndUpdate}} to fix them. However, 
> {{FsDatasetImpl.checkAndUpdate}} is a synchronized call.
> As I have about 6 million blocks on every datanode, and every 6-hour scan 
> finds about 25000 abnormal blocks to fix, this leads to a long lock being 
> held on the FsDatasetImpl object.
> Let's assume every block needs 10ms to fix (because of SAS disk latency); 
> that will take 250 seconds to finish, which means all reads and writes will 
> be blocked for roughly four minutes on that datanode.
>  
> {code:java}
> 2019-05-06 08:06:51,704 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1644920766-10.223.143.220-1450099987967 Total blocks: 6850197, missing 
> metadata files:23574, missing block files:23574, missing blocks in 
> memory:47625, mismatched blocks:0
> ...
> 2019-05-06 08:16:41,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Took 588402ms to process 1 commands from NN
> {code}
> It takes a long time to process commands from the NN because threads are 
> blocked, and the namenode will see a long lastContact time for this datanode.
> This probably affects all HDFS versions.
> *How to fix:*
> Just as invalidate commands from the namenode are processed with a batch size 
> of 1000, fixing these abnormal blocks should be handled in batches too, 
> sleeping 2 seconds between batches to allow normal block reads and writes. A 
> rough sketch follows below.
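
A rough sketch of the proposed batching (the helper name, the ScanInfo diff 
type, and the simplified per-block call are illustrative; the real 
checkAndUpdate signature differs):
{code:java}
private static final int BATCH_SIZE = 1000;       // as for invalidate commands
private static final long SLEEP_BETWEEN_BATCHES_MS = 2000;

void checkAndUpdateInBatches(String bpid, List<ScanInfo> diffs)
    throws InterruptedException {
  for (int i = 0; i < diffs.size(); i += BATCH_SIZE) {
    int end = Math.min(i + BATCH_SIZE, diffs.size());
    synchronized (this) {                 // hold the dataset lock per batch
      for (ScanInfo info : diffs.subList(i, end)) {
        checkAndUpdate(bpid, info);       // existing per-block fix-up logic
      }
    }
    Thread.sleep(SLEEP_BETWEEN_BATCHES_MS); // let blocked reads/writes proceed
  }
}
{code}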



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


