[jira] [Commented] (HDFS-14887) RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable

2019-10-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943346#comment-16943346
 ] 

Hadoop QA commented on HDFS-14887:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 12s{color} 
| {color:red} HDFS-14887 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14887 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28010/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable
> --
>
> Key: HDFS-14887
> URL: https://issues.apache.org/jira/browse/HDFS-14887
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: 14887.after.png, 14887.before.png, HDFS-14887.001.patch
>
>
> In the Router Web UI, Observer Namenode information is displayed as Unavailable.
> We should show a proper icon for observer namenodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14887) RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable

2019-10-02 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14887:
-
Attachment: 14887.before.png
14887.after.png

> RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable
> --
>
> Key: HDFS-14887
> URL: https://issues.apache.org/jira/browse/HDFS-14887
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: 14887.after.png, 14887.before.png, HDFS-14887.001.patch
>
>
> In the Router Web UI, Observer Namenode information is displayed as Unavailable.
> We should show a proper icon for observer namenodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-10-02 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reopened HDFS-12979:
---

Thanks for the catch [~shv]. I've committed to branch-3.2 and branch-3.1, as 
they differed only in some imports. But the branch-2 patch is quite different, 
so I'm reopening to post it for a Jenkins run.

> StandbyNode should upload FsImage to ObserverNode after checkpointing.
> --
>
> Key: HDFS-12979
> URL: https://issues.apache.org/jira/browse/HDFS-12979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-12979-branch-2.001.patch, HDFS-12979.001.patch, 
> HDFS-12979.002.patch, HDFS-12979.003.patch, HDFS-12979.004.patch, 
> HDFS-12979.005.patch, HDFS-12979.006.patch, HDFS-12979.007.patch, 
> HDFS-12979.008.patch, HDFS-12979.009.patch, HDFS-12979.010.patch, 
> HDFS-12979.011.patch, HDFS-12979.012.patch, HDFS-12979.013.patch, 
> HDFS-12979.014.patch, HDFS-12979.015.patch
>
>
> ObserverNode does not create checkpoints, so its fsimage file can get very 
> old, making bootstrapping an ObserverNode take too long. A StandbyNode should 
> copy the latest fsimage to the ObserverNode(s) as well as to the ANN.
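Conceptually, the fix fans out the checkpoint upload: instead of transferring the new fsimage only to the active NN, the StandbyNode sends it to every peer that does not checkpoint itself. A minimal sketch under assumed type names (this is not the actual Hadoop API):

{code:java}
import java.io.File;
import java.net.URL;
import java.util.List;

// Hypothetical sketch, not the committed patch: after writing a checkpoint,
// upload the new fsimage to the ANN and to all ObserverNodes.
public class CheckpointFanout {
  interface Peer {
    boolean createsOwnCheckpoints(); // true only for the checkpointing standby
    URL imageUploadUrl();            // e.g. the peer's image-upload servlet
  }

  static void uploadCheckpoint(List<Peer> peers, File fsimage) {
    for (Peer p : peers) {
      if (!p.createsOwnCheckpoints()) {
        upload(p.imageUploadUrl(), fsimage);
      }
    }
  }

  private static void upload(URL target, File image) {
    // HTTP PUT of the image file; transfer details omitted in this sketch.
  }
}
{code}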



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-10-02 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12979:
--
Attachment: HDFS-12979-branch-2.001.patch

> StandbyNode should upload FsImage to ObserverNode after checkpointing.
> --
>
> Key: HDFS-12979
> URL: https://issues.apache.org/jira/browse/HDFS-12979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-12979-branch-2.001.patch, HDFS-12979.001.patch, 
> HDFS-12979.002.patch, HDFS-12979.003.patch, HDFS-12979.004.patch, 
> HDFS-12979.005.patch, HDFS-12979.006.patch, HDFS-12979.007.patch, 
> HDFS-12979.008.patch, HDFS-12979.009.patch, HDFS-12979.010.patch, 
> HDFS-12979.011.patch, HDFS-12979.012.patch, HDFS-12979.013.patch, 
> HDFS-12979.014.patch, HDFS-12979.015.patch
>
>
> ObserverNode does not create checkpoints, so its fsimage file can get very 
> old, making bootstrapping an ObserverNode take too long. A StandbyNode should 
> copy the latest fsimage to the ObserverNode(s) as well as to the ANN.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-10-02 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12979:
--
Status: Patch Available  (was: Reopened)

> StandbyNode should upload FsImage to ObserverNode after checkpointing.
> --
>
> Key: HDFS-12979
> URL: https://issues.apache.org/jira/browse/HDFS-12979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-12979-branch-2.001.patch, HDFS-12979.001.patch, 
> HDFS-12979.002.patch, HDFS-12979.003.patch, HDFS-12979.004.patch, 
> HDFS-12979.005.patch, HDFS-12979.006.patch, HDFS-12979.007.patch, 
> HDFS-12979.008.patch, HDFS-12979.009.patch, HDFS-12979.010.patch, 
> HDFS-12979.011.patch, HDFS-12979.012.patch, HDFS-12979.013.patch, 
> HDFS-12979.014.patch, HDFS-12979.015.patch
>
>
> ObserverNode does not create checkpoints, so its fsimage file can get very 
> old, making bootstrapping an ObserverNode take too long. A StandbyNode should 
> copy the latest fsimage to the ObserverNode(s) as well as to the ANN.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2200?focusedWorklogId=322421&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322421
 ]

ASF GitHub Bot logged work on HDDS-2200:


Author: ASF GitHub Bot
Created on: 03/Oct/19 04:46
Start Date: 03/Oct/19 04:46
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1577: HDDS-2200 : 
Recon does not handle the NULL snapshot from OM DB cleanly.
URL: https://github.com/apache/hadoop/pull/1577#issuecomment-537784252
 
 
   +1 LGTM
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322421)
Time Spent: 1h  (was: 50m)

> Recon does not handle the NULL snapshot from OM DB cleanly.
> ---
>
> Key: HDDS-2200
> URL: https://issues.apache.org/jira/browse/HDDS-2200
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code}
> 2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location 
> got from OM.
> 2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on 
> Recon tasks.
> 2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' 
> run of ContainerKeyMapperTask.
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609319840
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609258721.
> 2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
> at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}
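The NPE follows from a null OM snapshot flowing into reprocess(); a minimal sketch of the kind of early-exit guard that avoids it (the interface names are simplified stand-ins, not Recon's API):

{code:java}
import java.util.Optional;

// Hypothetical sketch: bail out of the OM sync cycle when no snapshot is
// available instead of reinitializing Recon tasks with a null snapshot.
public class OmSyncGuard {
  interface SnapshotProvider { Optional<byte[]> latestSnapshot(); }
  interface TaskController { void reInitializeTasks(byte[] snapshot); }

  static void syncDataFromOM(SnapshotProvider om, TaskController tasks) {
    Optional<byte[]> snap = om.latestSnapshot();
    if (!snap.isPresent()) {
      // Previously a null snapshot fell through to reprocess() and NPE'd.
      System.err.println("Null snapshot location got from OM; skipping run.");
      return;
    }
    tasks.reInitializeTasks(snap.get());
  }
}
{code}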



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14216:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Fix For: 3.3.0, 3.1.4, 3.2.1
>
> Attachments: HDFS-14216.branch-3.1.patch, HDFS-14216_1.patch, 
> HDFS-14216_2.patch, HDFS-14216_3.patch, HDFS-14216_4.patch, 
> HDFS-14216_5.patch, HDFS-14216_6.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  Workload:
> {code:java}
> curl -i -X PUT -T $HOMEPATH/test.txt \
>   "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> The method:
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods
>     .chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a
> wrong DN name, bm.getDatanodeManager().getDatanodeByHost(host) returns null
> and *_excludes_* ends up containing null. When *_excludes_* is used later,
> an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  
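A minimal sketch of the kind of null guard that prevents this, written as a fragment against the snippet above (the committed patch may differ in detail):

{code:java}
// Sketch only: getDatanodeByHost()/getDatanodeByXferAddr() return null for
// an unknown or just-removed host; skip such hosts so that the excludes
// set never contains null.
DatanodeDescriptor node = bm.getDatanodeManager().getDatanodeByHost(host);
if (node != null) {
  excludes.add(node);
}
{code}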



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14216:
---
Fix Version/s: 3.1.4

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Fix For: 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-14216.branch-3.1.patch, HDFS-14216_1.patch, 
> HDFS-14216_2.patch, HDFS-14216_3.patch, HDFS-14216_4.patch, 
> HDFS-14216_5.patch, HDFS-14216_6.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  Workload:
> {code:java}
> curl -i -X PUT -T $HOMEPATH/test.txt \
>   "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> The method:
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods
>     .chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a
> wrong DN name, bm.getDatanodeManager().getDatanodeByHost(host) returns null
> and *_excludes_* ends up containing null. When *_excludes_* is used later,
> an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-10-02 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943294#comment-16943294
 ] 

Wei-Chiu Chuang commented on HDFS-14216:


Failure doesn't look related. Pushing it to branch-3.1

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-14216.branch-3.1.patch, HDFS-14216_1.patch, 
> HDFS-14216_2.patch, HDFS-14216_3.patch, HDFS-14216_4.patch, 
> HDFS-14216_5.patch, HDFS-14216_6.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  Workload:
> {code:java}
> curl -i -X PUT -T $HOMEPATH/test.txt \
>   "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> The method:
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods
>     .chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<Node>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx), Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When the datanode (e.g. hadoop2) is wiped just before line 280, or we give a
> wrong DN name, bm.getDatanodeManager().getDatanodeByHost(host) returns null
> and *_excludes_* ends up containing null. When *_excludes_* is used later,
> an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2216) Rename HADOOP_RUNNER_VERSION to OZONE_RUNNER_VERSION in compose .env files

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2216?focusedWorklogId=322398&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322398
 ]

ASF GitHub Bot logged work on HDDS-2216:


Author: ASF GitHub Bot
Created on: 03/Oct/19 01:46
Start Date: 03/Oct/19 01:46
Worklog Time Spent: 10m 
  Work Description: cxorm commented on issue #1570: HDDS-2216. Rename 
HADOOP_RUNNER_VERSION to OZONE_RUNNER_VERSION in co…
URL: https://github.com/apache/hadoop/pull/1570#issuecomment-537751265
 
 
   Thanks @adoroszlai 
   I am going to check the unit tests.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322398)
Time Spent: 50m  (was: 40m)

> Rename HADOOP_RUNNER_VERSION to OZONE_RUNNER_VERSION in compose .env files
> --
>
> Key: HDDS-2216
> URL: https://issues.apache.org/jira/browse/HDDS-2216
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In HDDS-1698 we replaced our apache/hadoop-runner base image with the 
> apache/ozone-runner base image.
> The version of the image is set by the .env files under the 
> hadoop-ozone/dist/src/main/compose directories:
> {code:java}
> cd hadoop-ozone/dist/src/main/compose
> grep -r HADOOP_RUNNER .
> ./ozoneperf/docker-compose.yaml:  image: 
> apache/ozone-runner:${HADOOP_RUNNER_VERSION}
> ./ozoneperf/docker-compose.yaml:  image: 
> apache/ozone-runner:${HADOOP_RUNNER_VERSION}
> ./ozoneperf/docker-compose.yaml:  image: 
> apache/ozone-runner:${HADOOP_RUNNER_VERSION}
>  {code}
> But the name of the variable is HADOOP_RUNNER_VERSION instead of 
> OZONE_RUNNER_VERSION.
> It would be great to rename it to OZONE_RUNNER_VERSION.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1737) Add Volume check in KeyManager and File Operations

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1737?focusedWorklogId=322395&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322395
 ]

ASF GitHub Bot logged work on HDDS-1737:


Author: ASF GitHub Bot
Created on: 03/Oct/19 01:39
Start Date: 03/Oct/19 01:39
Worklog Time Spent: 10m 
  Work Description: cxorm commented on issue #1559: HDDS-1737. Add Volume 
check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop/pull/1559#issuecomment-537750063
 
 
   Yes, I will check it soon.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322395)
Time Spent: 50m  (was: 40m)

> Add Volume check in KeyManager and File Operations
> --
>
> Key: HDDS-1737
> URL: https://issues.apache.org/jira/browse/HDDS-1737
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> This is to address a TODO: add a volume existence check when performing 
> Key/File operations.
>  
> // TODO: Not checking volume exist here, once we have full cache we can
> // add volume exist check also.
>  
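A minimal sketch of the volume-existence check the TODO calls for (the table type and exception are simplified stand-ins for the OM metadata-manager API):

{code:java}
import java.io.FileNotFoundException;
import java.util.Set;

// Hypothetical sketch: before a key/file operation, verify the volume exists
// in the (cached) OM volume table and fail fast if it does not.
public class VolumeCheck {
  static void checkVolumeExists(Set<String> volumeTableKeys, String volume)
      throws FileNotFoundException {
    if (!volumeTableKeys.contains("/" + volume)) {
      throw new FileNotFoundException("VOLUME_NOT_FOUND: " + volume);
    }
  }
}
{code}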



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-10-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943281#comment-16943281
 ] 

Hadoop QA commented on HDFS-14216:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
25s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 155 unchanged - 1 fixed = 155 total (was 156) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:080e9d0f9b3 |
| JIRA Issue | HDFS-14216 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982031/HDFS-14216.branch-3.1.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 35488e4a2808 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.1 / ab7ecd6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28008/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Work logged] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2217?focusedWorklogId=322385&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322385
 ]

ASF GitHub Bot logged work on HDDS-2217:


Author: ASF GitHub Bot
Created on: 03/Oct/19 00:51
Start Date: 03/Oct/19 00:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1579: HDDS-2217 : 
Remove log4j and audit configuration from the docker-config files
URL: https://github.com/apache/hadoop/pull/1579#issuecomment-537740219
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 107 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 40 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 41 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 896 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 21 | hadoop-ozone in trunk failed. |
   | -0 | patch | 975 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 18 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 18 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 831 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 22 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 32 | hadoop-hdds in the patch failed. |
   | -1 | unit | 30 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 2425 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1579 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient shellcheck shelldocs |
   | uname | Linux 51d47d0280a6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4c24f24 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 

[jira] [Updated] (HDFS-14678) Allow triggerBlockReport to a specific namenode

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14678:
---
Fix Version/s: 3.2.2
   3.1.4

> Allow triggerBlockReport to a specific namenode
> ---
>
> Key: HDFS-14678
> URL: https://issues.apache.org/jira/browse/HDFS-14678
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.2
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
>
> In our largest prod cluster (running 2.8.2) we have >3k hosts. Every time we 
> rolling-restart NNs we need to wait for block reports, which takes >2.5 hours 
> per NN.
> One way to make this faster is to manually trigger a full block report from 
> all datanodes ([HDFS-7278|https://issues.apache.org/jira/browse/HDFS-7278]). 
> However, the current triggerBlockReport command triggers a block report on 
> all NNs, which floods the active NN as well.
> A quick solution is to add an option specifying the NN that the manually 
> triggered block report should go to, something like:
> *_hdfs dfsadmin [-triggerBlockReport [-incremental] <datanode_host:ipc_port> 
> [-namenode <namenode_host:ipc_port>]]_*
> So when restarting a standby NN or observer NN we can trigger an aggressive 
> block report to that specific NN so it exits safemode faster, without 
> risking active NN performance.
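With such an option, restarting a standby could then be followed by something like this (host:port values are illustrative; see the usage string above):

{code:java}
hdfs dfsadmin -triggerBlockReport -incremental dn1.example.com:9867 -namenode nn2.example.com:8020
{code}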



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1737) Add Volume check in KeyManager and File Operations

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1737?focusedWorklogId=322381&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322381
 ]

ASF GitHub Bot logged work on HDDS-1737:


Author: ASF GitHub Bot
Created on: 03/Oct/19 00:24
Start Date: 03/Oct/19 00:24
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1559: HDDS-1737. Add 
Volume check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop/pull/1559#issuecomment-537734659
 
 
   Can you please check the unit test failures? thanks
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322381)
Time Spent: 40m  (was: 0.5h)

> Add Volume check in KeyManager and File Operations
> --
>
> Key: HDDS-1737
> URL: https://issues.apache.org/jira/browse/HDDS-1737
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is to address a TODO: add a volume existence check when performing 
> Key/File operations.
>  
> // TODO: Not checking volume exist here, once we have full cache we can
> // add volume exist check also.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14678) Allow triggerBlockReport to a specific namenode

2019-10-02 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943259#comment-16943259
 ] 

Wei-Chiu Chuang commented on HDFS-14678:


Cherry-picking the commit into branch-3.2 and branch-3.1.
There's just a trivial conflict in the test code due to HADOOP-14178. 

> Allow triggerBlockReport to a specific namenode
> ---
>
> Key: HDFS-14678
> URL: https://issues.apache.org/jira/browse/HDFS-14678
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.2
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
> Fix For: 3.3.0
>
>
> In our largest prod cluster (running 2.8.2) we have >3k hosts. Every time we 
> rolling-restart NNs we need to wait for block reports, which takes >2.5 hours 
> per NN.
> One way to make this faster is to manually trigger a full block report from 
> all datanodes ([HDFS-7278|https://issues.apache.org/jira/browse/HDFS-7278]). 
> However, the current triggerBlockReport command triggers a block report on 
> all NNs, which floods the active NN as well.
> A quick solution is to add an option specifying the NN that the manually 
> triggered block report should go to, something like:
> *_hdfs dfsadmin [-triggerBlockReport [-incremental] <datanode_host:ipc_port> 
> [-namenode <namenode_host:ipc_port>]]_*
> So when restarting a standby NN or observer NN we can trigger an aggressive 
> block report to that specific NN so it exits safemode faster, without 
> risking active NN performance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2072) Make StorageContainerLocationProtocolService message based

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943257#comment-16943257
 ] 

Hudson commented on HDDS-2072:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17447 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17447/])
HDDS-2072. Make StorageContainerLocationProtocolService message based 
(aengineer: rev 4c24f2434dd8c09bb104ee660975855eca287fe6)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
* (edit) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/BaseInsightSubCommand.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/ScmProtocolBlockLocationInsight.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/ScmProtocolContainerLocationInsight.java
* (edit) hadoop-hdds/common/src/main/proto/ScmBlockLocationProtocol.proto


> Make StorageContainerLocationProtocolService message based
> --
>
> Key: HDDS-2072
> URL: https://issues.apache.org/jira/browse/HDDS-2072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the gRPC 
> service and the main message contains all the required common information 
> (e.g. tracing).
> StorageContainerLocationProtocolService is not yet migrated to this approach. 
> To make our generic debug tool more powerful and to unify our protocols, I 
> suggest transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-02 Thread Chris Teoh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Teoh updated HDDS-2217:
-
Status: Patch Available  (was: In Progress)

> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Log4j configuration lines are added to the docker-config files under 
> hadoop-ozone/dist/src/main/compose/..., mainly to make it easier to 
> reconfigure the log level of any component.
> As we already have an "ozone insight" tool that can modify log levels at 
> runtime, we don't need these lines any more.
> {code:java}
> LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
> LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd 
> HH:mm:ss} %-5p %c{1}:%L - %m%n
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
> LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 {code}
> We can remove them together with the audit log entries, as we already have a 
> default log4j.properties / audit log4j2 config.
> After the removal, the clusters should be tested: the Ozone CLI should not 
> print any confusing log messages (such as NativeLib missing or anything 
> else). AFAIK these are already turned off in the etc/hadoop log4j.properties.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14858) [SBN read] Allow configurably enable/disable AlignmentContext on NameNode

2019-10-02 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14858:
--
Fix Version/s: 3.2.2
   3.1.4
   3.3.0
   2.10.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> [SBN read] Allow configurably enable/disable AlignmentContext on NameNode
> -
>
> Key: HDFS-14858
> URL: https://issues.apache.org/jira/browse/HDFS-14858
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14858.001.patch, HDFS-14858.002.patch, 
> HDFS-14858.003.patch, HDFS-14858.004.patch
>
>
> As brought up under HDFS-14277, we should make sure SBN read has no 
> performance impact when it is not enabled. One potential overhead of SBN read 
> is maintaining and updating additional state on the NameNode. Specifically, 
> this is done by creating/updating/checking a {{GlobalStateIdContext}} 
> instance. Currently this logic is exercised even when SBN read is not 
> enabled. We can make it configurable so that when SBN read is not enabled 
> there is no such overhead and everything works as-is.
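The configurable gate amounts to constructing the {{GlobalStateIdContext}} only when a flag is set; a minimal sketch (the config key is assumed from this patch and should be checked against the commit):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: build the alignment context only when SBN read is
// enabled, so a NameNode with the default config pays no state-id overhead.
public class AlignmentContextGate {
  // Assumed key; verify against the committed patch.
  static final String KEY = "dfs.namenode.state.context.enabled";

  static Object maybeCreateContext(Configuration conf) {
    return conf.getBoolean(KEY, false)
        ? new Object() // new GlobalStateIdContext(namesystem) in the real code
        : null;        // the RPC layer then skips state-id bookkeeping
  }
}
{code}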



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14858) [SBN read] Allow configurably enable/disable AlignmentContext on NameNode

2019-10-02 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943254#comment-16943254
 ] 

Chen Liang commented on HDFS-14858:
---

The two failed tests are unrelated and passed in my local run. I've committed 
to trunk, branch-3.2, branch-3.1 and branch-2. Thanks to all the reviewers!

> [SBN read] Allow configurably enable/disable AlignmentContext on NameNode
> -
>
> Key: HDFS-14858
> URL: https://issues.apache.org/jira/browse/HDFS-14858
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14858.001.patch, HDFS-14858.002.patch, 
> HDFS-14858.003.patch, HDFS-14858.004.patch
>
>
> As brought up under HDFS-14277, we should make sure SBN read has no 
> performance impact when it is not enabled. One potential overhead of SBN read 
> is maintaining and updating additional state on the NameNode. Specifically, 
> this is done by creating/updating/checking a {{GlobalStateIdContext}} 
> instance. Currently this logic is exercised even when SBN read is not 
> enabled. We can make it configurable so that when SBN read is not enabled 
> there is no such overhead and everything works as-is.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-02 Thread Chris Teoh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2217 started by Chris Teoh.

> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Log4j configuration lines are added to the docker-config files under 
> hadoop-ozone/dist/src/main/compose/..., mainly to make it easier to 
> reconfigure the log level of any component.
> As we already have an "ozone insight" tool that can modify log levels at 
> runtime, we don't need these lines any more.
> {code:java}
> LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
> LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd 
> HH:mm:ss} %-5p %c{1}:%L - %m%n
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
> LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 {code}
> We can remove them together with the audit log entries, as we already have a 
> default log4j.properties / audit log4j2 config.
> After the removal, the clusters should be tested: the Ozone CLI should not 
> print any confusing log messages (such as NativeLib missing or anything 
> else). AFAIK these are already turned off in the etc/hadoop log4j.properties.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8881) Erasure Coding: internal blocks got missed and got over-replicated at the same time

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-8881:
--
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Erasure Coding: internal blocks got missed and got over-replicated at the 
> same time
> ---
>
> Key: HDFS-8881
> URL: https://issues.apache.org/jira/browse/HDFS-8881
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Major
> Attachments: HDFS-8881.00.patch
>
>
> We know the Repl checking depends on {{BlockManager#countNodes()}}, but 
> countNodes() has limitations for striped block groups.
> *One* missing internal block will be caught by Repl checking and handled by 
> ReplicationMonitor.
> *One* over-replicated internal block will be caught by Repl checking and 
> handled by processOverReplicatedBlocks.
> *One* missing internal block and *two* over-replicated internal blocks *at 
> the same time* will be caught by Repl checking, handled first by 
> processOverReplicatedBlocks and later by ReplicationMonitor.
> *One* missing internal block and *one* over-replicated internal block *at the 
> same time* will *NOT* be caught by Repl checking.
> "At the same time" means one missing internal block can't be recovered while 
> one internal block stays over-replicated anyway. For example:
> Scenario A:
> step 1. Blocks #0 and #1 are reported missing.
> 2. A new #1 gets recovered.
> 3. The old #1 comes back, and the recovery work for #0 fails.
> Scenario B:
> 1. A DN that has #1 is decommissioned or dies.
> 2. Block #0 is reported missing.
> 3. The DN holding #1 is recommissioned, and the recovery work for #0 fails.
> In the end, the block group has \[1, 1, 2, 3, 4, 5, 6, 7, 8\], assuming a 6+3 
> schema. The client always needs to decode #0 if the block group doesn't get 
> handled.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2072) Make StorageContainerLocationProtocolService message based

2019-10-02 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2072:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to the trunk.

> Make StorageContainerLocationProtocolService message based
> --
>
> Key: HDDS-2072
> URL: https://issues.apache.org/jira/browse/HDDS-2072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the gRPC 
> service and the main message contains all the required common information 
> (e.g. tracing).
> StorageContainerLocationProtocolService is not yet migrated to this approach. 
> To make our generic debug tool more powerful and to unify our protocols, I 
> suggest transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2072) Make StorageContainerLocationProtocolService message based

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2072?focusedWorklogId=322376&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322376
 ]

ASF GitHub Bot logged work on HDDS-2072:


Author: ASF GitHub Bot
Created on: 03/Oct/19 00:01
Start Date: 03/Oct/19 00:01
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1514: HDDS-2072. Make 
StorageContainerLocationProtocolService message based
URL: https://github.com/apache/hadoop/pull/1514#issuecomment-537729587
 
 
   I have rebased and committed this change. Thank you for the contribution.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322376)
Time Spent: 1h 20m  (was: 1h 10m)

> Make StorageContainerLocationProtocolService message based
> --
>
> Key: HDDS-2072
> URL: https://issues.apache.org/jira/browse/HDDS-2072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the gRPC 
> service and the main message contains all the required common information 
> (e.g. tracing).
> StorageContainerLocationProtocolService is not yet migrated to this approach. 
> To make our generic debug tool more powerful and to unify our protocols, I 
> suggest transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2072) Make StorageContainerLocationProtocolService message based

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2072?focusedWorklogId=322377=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322377
 ]

ASF GitHub Bot logged work on HDDS-2072:


Author: ASF GitHub Bot
Created on: 03/Oct/19 00:01
Start Date: 03/Oct/19 00:01
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1514: HDDS-2072. 
Make StorageContainerLocationProtocolService message based
URL: https://github.com/apache/hadoop/pull/1514
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322377)
Time Spent: 1.5h  (was: 1h 20m)

> Make StorageContainerLocationProtocolService message based
> --
>
> Key: HDDS-2072
> URL: https://issues.apache.org/jira/browse/HDDS-2072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the grpc 
> service and the main message contains all the required common information 
> (e.g. tracing).
> StorageContainerLocationProtocolService is not yet migrated to this approach. 
> To make our generic debug tool more powerful and to unify our protocols, I 
> suggest transforming this protocol as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14754) Erasure Coding : The number of Under-Replicated Blocks never reduced

2019-10-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943251#comment-16943251
 ] 

Hadoop QA commented on HDFS-14754:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 9s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLogRace |
|   | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
|   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:080e9d0f9b3 |
| JIRA Issue | HDFS-14754 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981986/HDFS-14754.branch-3.1.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ff8792f24e7f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.1 / 122b02e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28007/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28007/testReport/ |
| Max. process+thread count | 3104 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-14858) [SBN read] Allow configurably enable/disable AlignmentContext on NameNode

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943250#comment-16943250
 ] 

Hudson commented on HDFS-14858:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17446 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17446/])
HDFS-14858. [SBN read] Allow configurably enable/disable (cliang: rev 
1303255aee75e5109433f937592a890e8d274ce2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestMultiObserverNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConsistentReadsObserver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestStateAlignmentContextWithHA.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ObserverNameNode.md
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNode.java


> [SBN read] Allow configurably enable/disable AlignmentContext on NameNode
> -
>
> Key: HDFS-14858
> URL: https://issues.apache.org/jira/browse/HDFS-14858
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14858.001.patch, HDFS-14858.002.patch, 
> HDFS-14858.003.patch, HDFS-14858.004.patch
>
>
> As brought up under HDFS-14277, we should make sure SBN read has no 
> performance impact when it is not enabled. One potential overhead of SBN read 
> is maintaining and updating additional state on the NameNode. 
> Specifically, this is done by creating/updating/checking a 
> {{GlobalStateIdContext}} instance. Currently, even without enabling SBN read, 
> this logic is still executed. We can make this configurable so that when 
> SBN read is not enabled, there is no such overhead and everything works as-is.
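
A hedged sketch of the gating this describes; the config key below is an assumption for illustration (the actual key is defined in the DFSConfigKeys change listed above):

{code:java}
// Hedged sketch: gate creation of the state-id context behind a config
// flag so that, when SBN read is disabled, no per-call state is touched.
// The key name below is an assumption for illustration.
import org.apache.hadoop.conf.Configuration;

public class AlignmentContextGateSketch {
  // Hypothetical key/default, mirroring the DFSConfigKeys pattern.
  static final String STATE_CONTEXT_ENABLED_KEY =
      "dfs.namenode.state.context.enabled";
  static final boolean STATE_CONTEXT_ENABLED_DEFAULT = false;

  static Object maybeCreateStateIdContext(Configuration conf) {
    boolean enabled = conf.getBoolean(
        STATE_CONTEXT_ENABLED_KEY, STATE_CONTEXT_ENABLED_DEFAULT);
    // Returning null means RPC handling skips state-id bookkeeping entirely.
    return enabled ? new Object() /* GlobalStateIdContext */ : null;
  }
}
{code}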



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2217?focusedWorklogId=322369=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322369
 ]

ASF GitHub Bot logged work on HDDS-2217:


Author: ASF GitHub Bot
Created on: 02/Oct/19 23:49
Start Date: 02/Oct/19 23:49
Worklog Time Spent: 10m 
  Work Description: christeoh commented on pull request #1579: HDDS-2217 : 
Remove log4j and audit configuration from the docker-config files
URL: https://github.com/apache/hadoop/pull/1579
 
 
   Removed redundant and potentially confusing LOG4J entries.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322369)
Remaining Estimate: 0h
Time Spent: 10m

> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Log4j configuration lines are added to the docker-config under 
> hadoop-ozone/dist/src/main/compose/...
> Mainly to make it easier to reconfigure the log level of any components.
> As we already have an "ozone insight" tool which can help us to modify the 
> log level at runtime, we don't need these lines any more.
> {code:java}
> LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
> LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd 
> HH:mm:ss} %-5p %c{1}:%L - %m%n
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
> LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 {code}
> We can remove them together with the audit log entries as we already have a 
> default log4j.properties / audit log4j2 config.
> After the removal, the clusters should be tested: Ozone CLI should not print 
> any confusing log messages (such as NativeLib is missing or anything else). 
> AFAIK they are already turned off in the etc/hadoop log4j.properties.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2217:
-
Labels: newbie pull-request-available  (was: newbie)

> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
>
> Log4j configuration lines are added to the docker-config under 
> hadoop-ozone/dist/src/main/compose/...
> Mainly to make it easier to reconfigure the log level of any components.
> As we already have an "ozone insight" tool which can help us to modify the 
> log level at runtime, we don't need these lines any more.
> {code:java}
> LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
> LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd 
> HH:mm:ss} %-5p %c{1}:%L - %m%n
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
> LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 {code}
> We can remove them together with the audit log entries as we already have a 
> default log4j.properties / audit log4j2 config.
> After the removal, the clusters should be tested: Ozone CLI should not print 
> any confusing log messages (such as NativeLib is missing or anything else). 
> AFAIK they are already turned off in the etc/hadoop log4j.properties.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14523) Remove excess read lock for NetworkTopology

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14523:
---
Fix Version/s: 3.2.2
   3.1.4

> Remove excess read lock for NetworkTopology
> --
>
> Key: HDFS-14523
> URL: https://issues.apache.org/jira/browse/HDFS-14523
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wu Weiwei
>Assignee: Wu Weiwei
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14523.1.patch
>
>
> getNumOfRacks() and getNumOfLeaves() are two high-frequency call methods 
> for BlockPlacementPolicy. Both need to take the NetworkTopology read 
> lock, and taking a lock in high-frequency call methods may impact NameNode 
> performance. 
> These two methods fetch the number of racks and the number of leaves just for 
> the chooseTarget calculation; a lock in these two methods cannot guarantee 
> the two values will not change in the subsequent calculations anyway.
> I think it's safe to remove the read lock from these two methods.
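
A minimal sketch of the proposed change, under the assumption that the counters are published safely by the write path (field names are illustrative, not the actual NetworkTopology code):

{code:java}
// Hedged sketch: return frequently-read counters without the read lock.
// Field and method names are illustrative, not the HDFS implementation.
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class TopologyCountersSketch {
  private final ReadWriteLock netlock = new ReentrantReadWriteLock();
  // volatile so unlocked readers still see the latest published value.
  private volatile int numOfRacks = 0;
  private volatile int numOfLeaves = 0;

  public int getNumOfRacks() {
    return numOfRacks;           // no read lock: a snapshot is sufficient
  }

  public int getNumOfLeaves() {
    return numOfLeaves;
  }

  public void addLeaf(boolean newRack) {
    netlock.writeLock().lock();  // mutations remain fully locked
    try {
      numOfLeaves++;
      if (newRack) {
        numOfRacks++;
      }
    } finally {
      netlock.writeLock().unlock();
    }
  }
}
{code}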



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2222) Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2222?focusedWorklogId=322357=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322357
 ]

ASF GitHub Bot logged work on HDDS-2222:


Author: ASF GitHub Bot
Created on: 02/Oct/19 23:23
Start Date: 02/Oct/19 23:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1578: HDDS-2222. Add a 
method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
URL: https://github.com/apache/hadoop/pull/1578#issuecomment-537721939
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 36 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 56 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 960 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1055 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 37 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-hdds in the patch failed. |
   | -1 | compile | 18 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-hdds in the patch failed. |
   | -1 | javac | 18 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 28 | hadoop-hdds: The patch generated 10 new + 0 
unchanged - 0 fixed = 10 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 749 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 19 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 2479 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1578 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d12771a51d3b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b09d389 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 

[jira] [Work logged] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2200?focusedWorklogId=322354=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322354
 ]

ASF GitHub Bot logged work on HDDS-2200:


Author: ASF GitHub Bot
Created on: 02/Oct/19 23:18
Start Date: 02/Oct/19 23:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1577: HDDS-2200 : 
Recon does not handle the NULL snapshot from OM DB cleanly.
URL: https://github.com/apache/hadoop/pull/1577#issuecomment-537720953
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 28 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 46 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 920 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 17 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1007 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 29 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 31 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 52 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 793 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 2448 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1577 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 50632eedcc48 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 53ed78b |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 

[jira] [Commented] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-10-02 Thread Sun Chao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943233#comment-16943233
 ] 

Sun Chao commented on HDFS-14660:
-

[~shv] yes I believe so.

> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14660.000.patch, HDFS-14660.001.patch, 
> HDFS-14660.002.patch, HDFS-14660.003.patch, HDFS-14660.004.patch
>
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using {{ObserverReadProxyProvider}}, 
> {{ConfiguredProxyProvider}}, or something else. Since an observer is just a 
> special type of SBN and we allow transitions between them, a client NOT using 
> {{ObserverReadProxyProvider}} will need to have 
> {{dfs.ha.namenodes.}} include all NameNodes in the cluster, and 
> therefore, it may send requests to an observer node.
> For this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set or not, and throw a {{StandbyException}} when it is not. 
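
A minimal sketch of that check (names are assumptions; the actual change lands in the NameNode RPC path):

{code:java}
// Hedged sketch: reject requests that carry no client state id, since
// they cannot come from ObserverReadProxyProvider. Names are
// illustrative assumptions, not the exact HDFS-14660 change.
import org.apache.hadoop.ipc.StandbyException;

public class ObserverRequestCheckSketch {
  static void checkObserverRequest(boolean isObserverState,
      boolean rpcHeaderHasStateId) throws StandbyException {
    if (isObserverState && !rpcHeaderHasStateId) {
      // Client is not observer-aware; tell it to fail over to another NN.
      throw new StandbyException(
          "Observer Node received request without a state id");
    }
  }
}
{code}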



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-10-02 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943232#comment-16943232
 ] 

Konstantin Shvachko commented on HDFS-14660:


This can now be committed to branch-2, [~csun]?

> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14660.000.patch, HDFS-14660.001.patch, 
> HDFS-14660.002.patch, HDFS-14660.003.patch, HDFS-14660.004.patch
>
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using {{ObserverReadProxyProvider}}, 
> {{ConfiguredProxyProvider}}, or something else. Since an observer is just a 
> special type of SBN and we allow transitions between them, a client NOT using 
> {{ObserverReadProxyProvider}} will need to have 
> {{dfs.ha.namenodes.}} include all NameNodes in the cluster, and 
> therefore, it may send requests to an observer node.
> For this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set or not, and throw a {{StandbyException}} when it is not. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2228) Fix NPE in OzoneDelegationTokenManager#addPersistedDelegationToken

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2228?focusedWorklogId=322347=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322347
 ]

ASF GitHub Bot logged work on HDDS-2228:


Author: ASF GitHub Bot
Created on: 02/Oct/19 23:05
Start Date: 02/Oct/19 23:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1571: HDDS-2228. Fix 
NPE in OzoneDelegationTokenManager#addPersistedDelegat…
URL: https://github.com/apache/hadoop/pull/1571#issuecomment-537717775
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 88 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for branch |
   | -1 | mvninstall | 45 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 43 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 989 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1085 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 33 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for patch |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 23 | hadoop-hdds in the patch failed. |
   | -1 | compile | 18 | hadoop-ozone in the patch failed. |
   | -1 | javac | 23 | hadoop-hdds in the patch failed. |
   | -1 | javac | 18 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 804 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2640 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1571 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 15bb956d4ec8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 53ed78b |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 

[jira] [Commented] (HDDS-2019) Handle Set DtService of token in S3Gateway for OM HA

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943219#comment-16943219
 ] 

Hudson commented on HDDS-2019:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17444 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17444/])
HDDS-2019. Handle Set DtService of token in S3Gateway for OM HA. (#1489) 
(github: rev b09d389001d95eedb7ec17c6f890e0ea3baace9d)
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/OzoneS3Util.java
* (add) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/util/TestOzoneS3Util.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneServiceProvider.java


> Handle Set DtService of token in S3Gateway for OM HA
> 
>
> Key: HDDS-2019
> URL: https://issues.apache.org/jira/browse/HDDS-2019
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set with the OM RpcAddress string. This Jira is 
> to handle:
>  # Set dtService with all OM addresses. Right now in OMClientProducer, the 
> UGI is created with the S3 token, and the serviceName of the token is set 
> with the OMAddress; for the HA case, this should be set with all OM RPC 
> addresses.
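
For illustration, the dtService for the HA case is essentially all OM RPC addresses joined into one service string; a hedged sketch (helper name assumed, not the actual OzoneS3Util code):

{code:java}
// Hedged sketch: build a delegation-token service string from every OM
// RPC address in the service, instead of a single OM address. The helper
// below is an illustration, not the actual OzoneS3Util implementation.
import java.util.List;

public class OmServiceNameSketch {
  static String buildServiceNameForToken(List<String> omRpcAddresses) {
    // e.g. ["om1:9862", "om2:9862", "om3:9862"]
    //   -> "om1:9862,om2:9862,om3:9862"
    return String.join(",", omRpcAddresses);
  }
}
{code}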



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2019) Handle Set DtService of token in S3Gateway for OM HA

2019-10-02 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2019:
-
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Handle Set DtService of token in S3Gateway for OM HA
> 
>
> Key: HDDS-2019
> URL: https://issues.apache.org/jira/browse/HDDS-2019
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set with the OM RpcAddress string. This Jira is 
> to handle:
>  # Set dtService with all OM addresses. Right now in OMClientProducer, the 
> UGI is created with the S3 token, and the serviceName of the token is set 
> with the OMAddress; for the HA case, this should be set with all OM RPC 
> addresses.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2019) Handle Set DtService of token in S3Gateway for OM HA

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2019?focusedWorklogId=322331=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322331
 ]

ASF GitHub Bot logged work on HDDS-2019:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:41
Start Date: 02/Oct/19 22:41
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1489: 
HDDS-2019. Handle Set DtService of token in S3Gateway for OM HA.
URL: https://github.com/apache/hadoop/pull/1489
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322331)
Time Spent: 5h 40m  (was: 5.5h)

> Handle Set DtService of token in S3Gateway for OM HA
> 
>
> Key: HDDS-2019
> URL: https://issues.apache.org/jira/browse/HDDS-2019
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set with the OM RpcAddress string. This Jira is 
> to handle:
>  # Set dtService with all OM addresses. Right now in OMClientProducer, the 
> UGI is created with the S3 token, and the serviceName of the token is set 
> with the OMAddress; for the HA case, this should be set with all OM RPC 
> addresses.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2019) Handle Set DtService of token in S3Gateway for OM HA

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2019?focusedWorklogId=322330=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322330
 ]

ASF GitHub Bot logged work on HDDS-2019:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:41
Start Date: 02/Oct/19 22:41
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1489: HDDS-2019. 
Handle Set DtService of token in S3Gateway for OM HA.
URL: https://github.com/apache/hadoop/pull/1489#issuecomment-537711587
 
 
   Thank You @xiaoyuyao  for the review.
   I will commit this to the trunk.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322330)
Time Spent: 5.5h  (was: 5h 20m)

> Handle Set DtService of token in S3Gateway for OM HA
> 
>
> Key: HDDS-2019
> URL: https://issues.apache.org/jira/browse/HDDS-2019
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set with the OM RpcAddress string. This Jira is 
> to handle:
>  # Set dtService with all OM addresses. Right now in OMClientProducer, the 
> UGI is created with the S3 token, and the serviceName of the token is set 
> with the OMAddress; for the HA case, this should be set with all OM RPC 
> addresses.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2222) Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2222:
-
Labels: pull-request-available  (was: )

> Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
> -
>
> Key: HDDS-2222
> URL: https://issues.apache.org/jira/browse/HDDS-2222
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Attachments: o_20191001.patch, o_20191002.patch
>
>
> PureJavaCrc32 and PureJavaCrc32C implement java.util.zip.Checksum which 
> provides only methods to update byte and byte[].  We propose to add a method 
> to update ByteBuffer.
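
A minimal sketch of what such a method could look like (an assumption about the shape, not the attached o_20191002.patch): use the backing array when one is available, otherwise consume the buffer byte by byte.

{code:java}
// Hedged sketch of an update(ByteBuffer) addition to a Checksum
// implementation; illustrative only, not the actual patch.
import java.nio.ByteBuffer;
import java.util.zip.Checksum;

public final class ChecksumByteBufferSketch {
  static void update(Checksum checksum, ByteBuffer buf) {
    if (buf.hasArray()) {
      // Fast path: feed the backing array in one call.
      checksum.update(buf.array(), buf.arrayOffset() + buf.position(),
          buf.remaining());
      buf.position(buf.limit());
    } else {
      // Direct buffers: fall back to byte-at-a-time updates.
      while (buf.hasRemaining()) {
        checksum.update(buf.get());
      }
    }
  }
}
{code}

The array fast path avoids a per-byte call for heap buffers, which is where most of the win would come from.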



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2222) Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2222?focusedWorklogId=322327=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322327
 ]

ASF GitHub Bot logged work on HDDS-2222:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:40
Start Date: 02/Oct/19 22:40
Worklog Time Spent: 10m 
  Work Description: szetszwo commented on pull request #1578: HDDS-2222. Add 
a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
URL: https://github.com/apache/hadoop/pull/1578
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322327)
Remaining Estimate: 0h
Time Spent: 10m

> Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
> -
>
> Key: HDDS-2222
> URL: https://issues.apache.org/jira/browse/HDDS-2222
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Attachments: o_20191001.patch, o_20191002.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> PureJavaCrc32 and PureJavaCrc32C implement java.util.zip.Checksum which 
> provides only methods to update byte and byte[].  We propose to add a method 
> to update ByteBuffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2224) Fix loadup cache for cache cleanup policy NEVER

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943212#comment-16943212
 ] 

Hudson commented on HDDS-2224:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17443 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17443/])
HDDS-2224. Fix loadup cache for cache cleanup policy NEVER. (#1567) (github: 
rev 53ed78bcdb716d0351a934ac18661ef9fa6a03d4)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCacheImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/TypedTable.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCache.java


> Fix loadup cache for cache cleanup policy NEVER
> ---
>
> Key: HDDS-2224
> URL: https://issues.apache.org/jira/browse/HDDS-2224
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> During initial startup/restart of OM, if a table has its cache cleanup policy 
> set to NEVER, we fill the table cache and also epochEntries. We do not need 
> to add entries to epochEntries, as epochEntries is only used for eviction 
> from the cache once the double buffer flushes to disk.
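
A hedged sketch of the distinction (type and method names assumed, not the HDDS-2224 patch): startup loading fills the key/value cache but skips the epoch bookkeeping, which only exists to drive eviction after a double-buffer flush.

{code:java}
// Hedged sketch: when cleanup policy is NEVER, startup loading fills the
// cache but skips epoch bookkeeping. Names are illustrative only.
import java.util.NavigableMap;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

public class TableCacheLoadSketch<K, V> {
  enum CleanupPolicy { NEVER, MANUAL }

  private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
  private final NavigableMap<Long, K> epochEntries = new TreeMap<>();
  private final CleanupPolicy policy;

  TableCacheLoadSketch(CleanupPolicy policy) {
    this.policy = policy;
  }

  void loadInitial(K key, V value, long epoch) {
    cache.put(key, value);
    if (policy != CleanupPolicy.NEVER) {
      // Only tracked when entries may later be evicted after a flush.
      epochEntries.put(epoch, key);
    }
  }
}
{code}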



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2019) Handle Set DtService of token in S3Gateway for OM HA

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2019?focusedWorklogId=322322=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322322
 ]

ASF GitHub Bot logged work on HDDS-2019:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:36
Start Date: 02/Oct/19 22:36
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1489: HDDS-2019. Handle 
Set DtService of token in S3Gateway for OM HA.
URL: https://github.com/apache/hadoop/pull/1489#issuecomment-537710142
 
 
   LGTM, +1.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322322)
Time Spent: 5h 20m  (was: 5h 10m)

> Handle Set DtService of token in S3Gateway for OM HA
> 
>
> Key: HDDS-2019
> URL: https://issues.apache.org/jira/browse/HDDS-2019
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set with the OM RpcAddress string. This Jira is 
> to handle:
>  # Set dtService with all OM addresses. Right now in OMClientProducer, the 
> UGI is created with the S3 token, and the serviceName of the token is set 
> with the OMAddress; for the HA case, this should be set with all OM RPC 
> addresses.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2230) Invalid entries in ozonesecure-mr config

2019-10-02 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943210#comment-16943210
 ] 

Xiaoyu Yao commented on HDDS-2230:
--

This is similar to an issue that I'm investigating. Fixing the docker config 
allows me to repro it locally. Thanks [~adoroszlai]

> Invalid entries in ozonesecure-mr config
> 
>
> Key: HDDS-2230
> URL: https://issues.apache.org/jira/browse/HDDS-2230
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
> Attachments: HDDS-2230.001.patch
>
>
> Some of the entries in {{ozonesecure-mr/docker-config}} are in an invalid 
> format, so they end up missing from the generated config files.
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozonesecure-mr
> $ ./test.sh # configs are generated during container startup
> $ cd ../..
> $ grep -c 'ozone.administrators' compose/ozonesecure-mr/docker-config
> 1
> $ grep -c 'ozone.administrators' etc/hadoop/ozone-site.xml
> 0
> $ grep -c 'yarn.timeline-service' compose/ozonesecure-mr/docker-config
> 5
> $ grep -c 'yarn.timeline-service' etc/hadoop/yarn-site.xml
> 2
> $ grep -c 'container-executor' compose/ozonesecure-mr/docker-config
> 3
> $ grep -c 'container-executor' etc/hadoop/yarn-site.xml
> 0
> {noformat}
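
For context, the compose docker-config convention (visible in the LOG4J.PROPERTIES_ lines quoted elsewhere in this digest) is one FILE_property=value pair per line; entries that deviate from that shape are silently dropped when the config files are generated. A hedged example of the expected shape (the property values are placeholders):

{noformat}
OZONE-SITE.XML_ozone.administrators=hadoop
YARN-SITE.XML_yarn.timeline-service.enabled=true
{noformat}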



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2162) Make OM Generic related configuration support HA style config

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943207#comment-16943207
 ] 

Hudson commented on HDDS-2162:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17442 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17442/])
HDDS-2162. Make OM Generic related configuration support HA style (github: rev 
169cef758dcbe7021d44765b4c18f3ed50eb5a03)
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMNodeDetails.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/package-info.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/snapshot/OzoneManagerSnapshotProvider.java
* (delete) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java


> Make OM Generic related configuration support HA style config
> -
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the 
> configs like 
> -OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,-
> -OZONE_OM_KERBEROS_PRINCIPAL_KEY,-
> -OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,-
> -OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support keys which are suffixed 
> with the service id and node id.-
>  
> Addressed OM_DB_DIRS and OZONE_OM_ADDRESS_KEY also in this patch.
>  
> This Jira is to fix the above configs.
>  
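
The HA-style convention amounts to resolving a key by appending the service id and node id, then falling back to the plain key; a hedged sketch (helper name assumed; the real logic lives in the OmUtils/OMHANodeDetails changes listed above):

{code:java}
// Hedged sketch of HA-style key resolution: prefer the most specific
// key ("key.serviceId.nodeId"), then fall back to the plain key. This
// mirrors the pattern described above; it is not the exact OmUtils code.
import org.apache.hadoop.conf.Configuration;

public class HaConfigKeySketch {
  static String getOmConfig(Configuration conf, String key,
      String serviceId, String nodeId) {
    String suffixed = key + "." + serviceId + "." + nodeId;
    String value = conf.get(suffixed);
    return value != null ? value : conf.get(key);
  }
}
{code}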



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2224) Fix loadup cache for cache cleanup policy NEVER

2019-10-02 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2224:
-
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Fix loadup cache for cache cleanup policy NEVER
> ---
>
> Key: HDDS-2224
> URL: https://issues.apache.org/jira/browse/HDDS-2224
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> During initial startup/restart of OM, if a table has its cache cleanup policy 
> set to NEVER, we fill the table cache and also epochEntries. We do not need 
> to add entries to epochEntries, as epochEntries is only used for eviction 
> from the cache once the double buffer flushes to disk.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2224) Fix loadup cache for cache cleanup policy NEVER

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2224?focusedWorklogId=322313=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322313
 ]

ASF GitHub Bot logged work on HDDS-2224:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:18
Start Date: 02/Oct/19 22:18
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1567: HDDS-2224. Fix 
loadup cache for cache cleanup policy NEVER.
URL: https://github.com/apache/hadoop/pull/1567#issuecomment-537705435
 
 
   Thank You @arp7 for the review.
   Test failures are not related to this patch.
   I will commit this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322313)
Time Spent: 40m  (was: 0.5h)

> Fix loadup cache for cache cleanup policy NEVER
> ---
>
> Key: HDDS-2224
> URL: https://issues.apache.org/jira/browse/HDDS-2224
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> During initial startup/restart of OM, if a table has its cache cleanup policy 
> set to NEVER, we fill the table cache and also epochEntries. We do not need 
> to add entries to epochEntries, as epochEntries is only used for eviction 
> from the cache once the double buffer flushes to disk.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2224) Fix loadup cache for cache cleanup policy NEVER

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2224?focusedWorklogId=322312=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322312
 ]

ASF GitHub Bot logged work on HDDS-2224:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:18
Start Date: 02/Oct/19 22:18
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1567: HDDS-2224. Fix 
loadup cache for cache cleanup policy NEVER.
URL: https://github.com/apache/hadoop/pull/1567#issuecomment-537705435
 
 
   Thank You @arp7 for the review.
   I will commit this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322312)
Time Spent: 0.5h  (was: 20m)

> Fix loadup cache for cache cleanup policy NEVER
> ---
>
> Key: HDDS-2224
> URL: https://issues.apache.org/jira/browse/HDDS-2224
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> During initial startup/restart of the OM, if a table has its cache cleanup
> policy set to NEVER, we fill both the table cache and epochEntries. We do not
> need to add entries to epochEntries, as epochEntries is only used for eviction
> from the cache once the double buffer flushes to disk.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2224) Fix loadup cache for cache cleanup policy NEVER

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2224?focusedWorklogId=322314&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322314
 ]

ASF GitHub Bot logged work on HDDS-2224:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:18
Start Date: 02/Oct/19 22:18
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1567: 
HDDS-2224. Fix loadup cache for cache cleanup policy NEVER.
URL: https://github.com/apache/hadoop/pull/1567
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322314)
Time Spent: 50m  (was: 40m)

> Fix loadup cache for cache cleanup policy NEVER
> ---
>
> Key: HDDS-2224
> URL: https://issues.apache.org/jira/browse/HDDS-2224
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> During initial startup/restart of the OM, if a table has its cache cleanup
> policy set to NEVER, we fill both the table cache and epochEntries. We do not
> need to add entries to epochEntries, as epochEntries is only used for eviction
> from the cache once the double buffer flushes to disk.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2162) Make OM Generic related configuration support HA style config

2019-10-02 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2162:
-
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Make OM Generic related configuration support HA style config
> -
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the
> configs like
> -OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,-
> -OZONE_OM_KERBEROS_PRINCIPAL_KEY,-
> -OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,-
> -OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support keys suffixed with the
> service id and node id.-
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  
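
A hedged sketch of the suffixed-key lookup idea (illustrative names, not the
exact Ozone utility API): resolve key.<serviceId>.<nodeId> first, then
key.<serviceId>, then the plain key. With this, one ozone-site.xml can be
shared by every OM in the cluster.

{code:java}
import org.apache.hadoop.conf.Configuration;

public final class HaConfigLookup {
  private HaConfigLookup() {
  }

  public static String get(Configuration conf, String key,
      String serviceId, String nodeId) {
    // Most specific first, e.g. ozone.om.address.omservice1.om1
    String value = conf.get(key + "." + serviceId + "." + nodeId);
    if (value == null) {
      value = conf.get(key + "." + serviceId); // per-service fallback
    }
    if (value == null) {
      value = conf.get(key);                   // plain, non-HA fallback
    }
    return value;
  }
}
{code}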



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make OM Generic related configuration support HA style config

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=322307&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322307
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:09
Start Date: 02/Oct/19 22:09
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1511: HDDS-2162. 
Make OM Generic related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#issuecomment-537702879
 
 
   Test failures are not related to this patch.
   Thank You @arp7 and @anuengineer for the review.
   I will commit this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322307)
Time Spent: 7h 20m  (was: 7h 10m)

> Make OM Generic related configuration support HA style config
> -
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the
> configs like
> -OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,-
> -OZONE_OM_KERBEROS_PRINCIPAL_KEY,-
> -OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,-
> -OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support keys suffixed with the
> service id and node id.-
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make OM Generic related configuration support HA style config

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=322308&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322308
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:09
Start Date: 02/Oct/19 22:09
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make OM Generic related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322308)
Time Spent: 7.5h  (was: 7h 20m)

> Make OM Generic related configuration support HA style config
> -
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the
> configs like
> -OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,-
> -OZONE_OM_KERBEROS_PRINCIPAL_KEY,-
> -OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,-
> -OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support keys suffixed with the
> service id and node id.-
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14618) Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14618:
---
Fix Version/s: 3.2.2
   3.1.4

> Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe).
> --
>
> Key: HDFS-14618
> URL: https://issues.apache.org/jira/browse/HDFS-14618
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: race.patch
>
>
> I submitted a  CR for this issue at:
> https://github.com/apache/hadoop/pull/1030
> The field {{timedOutItems}}  (an {{ArrayList}}, i.e., not thread safe):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L70
> is protected by synchronization on itself ({{timedOutItems}}):
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L167-L168
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L267-L268
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L178
> However, in one place:
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java#L133-L135
> it is (trying to be) protected by synchronizing on
> {{pendingReconstructions}} --- but this cannot protect {{timedOutItems}}.
> Synchronizing on different objects does not ensure mutual exclusion with the
> other locations.
> I.e., 2 code locations, one synchronized by {{pendingReconstructions}} and
> the other by {{timedOutItems}}, can still execute concurrently.
> This CR adds a synchronized block on {{timedOutItems}}.
> Note that this CR keeps the synchronized on {{pendingReconstructions}}, which 
> is needed for a different purpose (protect {{pendingReconstructions}})
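
To make the race concrete, a self-contained sketch (field names mirror the
text above; this is not the actual PendingReconstructionBlocks source):

{code:java}
import java.util.ArrayList;
import java.util.List;

class PendingReconstructionSketch {
  private final List<String> pendingReconstructions = new ArrayList<>();
  private final List<String> timedOutItems = new ArrayList<>();

  // Correct: every access agrees on the timedOutItems monitor.
  void drainTimedOut(List<String> out) {
    synchronized (timedOutItems) {
      out.addAll(timedOutItems);
      timedOutItems.clear();
    }
  }

  // Buggy variant: holding the pendingReconstructions monitor does NOT
  // exclude a thread inside drainTimedOut(), so both can mutate
  // timedOutItems concurrently.
  void addTimedOutBuggy(String block) {
    synchronized (pendingReconstructions) {
      timedOutItems.add(block);
    }
  }

  // The fix from the CR: keep the outer lock for pendingReconstructions
  // and add the inner one for timedOutItems.
  void addTimedOutFixed(String block) {
    synchronized (pendingReconstructions) {
      synchronized (timedOutItems) {
        timedOutItems.add(block);
      }
    }
  }
}
{code}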



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make OM Generic related configuration support HA style config

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=322305&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322305
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:07
Start Date: 02/Oct/19 22:07
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1511: HDDS-2162. Make 
OM Generic related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#issuecomment-537702251
 
 
   I agree; once you do the standard sanity checks, I think we should go ahead 
and commit. Thank you for working on this.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322305)
Time Spent: 7h 10m  (was: 7h)

> Make OM Generic related configuration support HA style config
> -
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the
> configs like
> -OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,-
> -OZONE_OM_KERBEROS_PRINCIPAL_KEY,-
> -OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,-
> -OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support keys suffixed with the
> service id and node id.-
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2226) S3 Secrets should use a strong RNG

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2226?focusedWorklogId=322302&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322302
 ]

ASF GitHub Bot logged work on HDDS-2226:


Author: ASF GitHub Bot
Created on: 02/Oct/19 22:04
Start Date: 02/Oct/19 22:04
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1572: HDDS-2226. S3 Secrets 
should use a strong RNG.
URL: https://github.com/apache/hadoop/pull/1572#issuecomment-537701365
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322302)
Time Spent: 1h  (was: 50m)

> S3 Secrets should use a strong RNG
> --
>
> Key: HDDS-2226
> URL: https://issues.apache.org/jira/browse/HDDS-2226
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The S3 token generation under ozone should use a strong RNG. 
> I want to thank Jonathan Leitschuh, for originally noticing this issue and 
> reporting it.
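
For reference, the kind of change being requested, sketched with the JDK's
SecureRandom (illustrative names, not the actual Ozone S3 secret code):

{code:java}
import java.security.SecureRandom;

public final class S3SecretSketch {
  // SecureRandom draws from the OS entropy source, unlike java.util.Random,
  // whose 48-bit seeded output is predictable and unsuitable for secrets.
  private static final SecureRandom RNG = new SecureRandom();

  private S3SecretSketch() {
  }

  public static String newSecret() {
    byte[] raw = new byte[32];           // 256 bits of strong randomness
    RNG.nextBytes(raw);
    StringBuilder sb = new StringBuilder(raw.length * 2);
    for (byte b : raw) {
      sb.append(String.format("%02x", b)); // hex-encode each byte
    }
    return sb.toString();
  }
}
{code}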



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-10-02 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943191#comment-16943191
 ] 

Konstantin Shvachko commented on HDFS-12979:


We should backport this to other branches as well, [~vagarychen].

> StandbyNode should upload FsImage to ObserverNode after checkpointing.
> --
>
> Key: HDFS-12979
> URL: https://issues.apache.org/jira/browse/HDFS-12979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-12979.001.patch, HDFS-12979.002.patch, 
> HDFS-12979.003.patch, HDFS-12979.004.patch, HDFS-12979.005.patch, 
> HDFS-12979.006.patch, HDFS-12979.007.patch, HDFS-12979.008.patch, 
> HDFS-12979.009.patch, HDFS-12979.010.patch, HDFS-12979.011.patch, 
> HDFS-12979.012.patch, HDFS-12979.013.patch, HDFS-12979.014.patch, 
> HDFS-12979.015.patch
>
>
> ObserverNode does not create checkpoints, so its fsimage file can get very
> old, making bootstrap of ObserverNode too long. A StandbyNode should copy the
> latest fsimage to ObserverNode(s) along with the ANN.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14216:
---
Attachment: HDFS-14216.branch-3.1.patch

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-14216.branch-3.1.patch, HDFS-14216_1.patch, 
> HDFS-14216_2.patch, HDFS-14216_3.patch, HDFS-14216_4.patch, 
> HDFS-14216_5.patch, HDFS-14216_6.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPATH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods
>     .chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx),
>             Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When a datanode (e.g. hadoop2) is just wiped before line 280, or we give a
> wrong DN name, then bm.getDatanodeManager().getDatanodeByHost(host) will
> return null and *_excludes_* *contains null*. When *_excludes_* is used
> later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  
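
A self-contained sketch of the fix idea (a generic stand-in for the
DatanodeManager lookups; not the exact committed patch): resolve each host
first and skip unknown ones, so the exclude set never contains null.

{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.function.Function;

final class ExcludesFix {
  // lookup stands in for getDatanodeByHost/getDatanodeByXferAddr, which
  // return null for hosts that just left the cluster or were misspelled.
  static <N> Set<N> buildExcludes(Iterable<String> hosts,
      Function<String, N> lookup) {
    Set<N> excludes = new HashSet<>();
    for (String host : hosts) {
      N node = lookup.apply(host);
      if (node != null) {   // the missing null check behind the NPE
        excludes.add(node);
      }
    }
    return excludes;
  }
}
{code}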



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14216:
---
Status: Patch Available  (was: Reopened)

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-14216.branch-3.1.patch, HDFS-14216_1.patch, 
> HDFS-14216_2.patch, HDFS-14216_3.patch, HDFS-14216_4.patch, 
> HDFS-14216_5.patch, HDFS-14216_6.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPATH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods
>     .chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx),
>             Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When a datanode (e.g. hadoop2) is just wiped before line 280, or we give a
> wrong DN name, then bm.getDatanodeManager().getDatanodeByHost(host) will
> return null and *_excludes_* *contains null*. When *_excludes_* is used
> later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14216) NullPointerException happens in NamenodeWebHdfs

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reopened HDFS-14216:


Reopen for branch-3.1. The only thing different is the LOG class change. Can't 
use parameterized logging.

> NullPointerException happens in NamenodeWebHdfs
> ---
>
> Key: HDFS-14216
> URL: https://issues.apache.org/jira/browse/HDFS-14216
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Critical
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-14216_1.patch, HDFS-14216_2.patch, 
> HDFS-14216_3.patch, HDFS-14216_4.patch, HDFS-14216_5.patch, 
> HDFS-14216_6.patch, hadoop-hires-namenode-hadoop11.log
>
>
>  workload
> {code:java}
> curl -i -X PUT -T $HOMEPATH/test.txt 
> "http://hadoop1:9870/webhdfs/v1/input?op=CREATE&excludedatanodes=hadoop2"
> {code}
> the method
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods
>     .chooseDatanode(String excludeDatanodes) {
>   HashSet<Node> excludes = new HashSet<>();
>   if (excludeDatanodes != null) {
>     for (String host : StringUtils
>         .getTrimmedStringCollection(excludeDatanodes)) {
>       int idx = host.indexOf(":");
>       if (idx != -1) {
>         excludes.add(bm.getDatanodeManager().getDatanodeByXferAddr(
>             host.substring(0, idx),
>             Integer.parseInt(host.substring(idx + 1))));
>       } else {
>         excludes.add(bm.getDatanodeManager().getDatanodeByHost(host)); // line 280
>       }
>     }
>   }
> }
> {code}
> When a datanode (e.g. hadoop2) is just wiped before line 280, or we give a
> wrong DN name, then bm.getDatanodeManager().getDatanodeByHost(host) will
> return null and *_excludes_* *contains null*. When *_excludes_* is used
> later, an NPE happens:
> {code:java}
> java.lang.NullPointerException
> at org.apache.hadoop.net.NodeBase.getPath(NodeBase.java:113)
> at 
> org.apache.hadoop.net.NetworkTopology.countNumOfAvailableNodes(NetworkTopology.java:672)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:533)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.chooseDatanode(NamenodeWebHdfsMethods.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.redirectURI(NamenodeWebHdfsMethods.java:384)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.put(NamenodeWebHdfsMethods.java:652)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:600)
> at 
> org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$2.run(NamenodeWebHdfsMethods.java:597)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:73)
> at org.apache.hadoop.ipc.ExternalCall.run(ExternalCall.java:30)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2830)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14610) HashMap is not thread safe. Field storageMap is typically synchronized by storageMap. However, in one place, field storageMap is not protected with synchronized.

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14610:
---
Fix Version/s: 3.2.2
   3.1.4

> HashMap is not thread safe. Field storageMap is typically synchronized by 
> storageMap. However, in one place, field storageMap is not protected with 
> synchronized.
> -
>
> Key: HDFS-14610
> URL: https://issues.apache.org/jira/browse/HDFS-14610
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Ward
>Assignee: Paul Ward
>Priority: Critical
>  Labels: fix-provided, patch-available
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: addingSynchronization.patch
>
>
> I submitted a CR for this issue at:
> [https://github.com/apache/hadoop/pull/1015]
> The field *storageMap* (a *HashMap*)
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L155]
> is typically protected by synchronization on *storageMap*, e.g.,
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L294]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L443]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> For a total of 9 locations.
> The reason is that *HashMap* is not thread safe.
> However, here:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L455]
> {{DatanodeStorageInfo storage =}}
> {{   storageMap.get(report.getStorage().getStorageID());}}
> It is not synchronized.
> Note that in the same method:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L484]
> *storageMap* is again protected by synchronization:
> {{synchronized (storageMap) {}}
> {{   storageMapSize = storageMap.size();}}
> {{}}}
>  
> The CR I linked above protects the above instance (line 455) with
> synchronization, like in line 484 and in all other occurrences.
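
A minimal sketch of the fix (names mirror the text above; this is not the full
DatanodeDescriptor): the lone unsynchronized read joins the same storageMap
monitor used by the other nine call sites.

{code:java}
import java.util.HashMap;
import java.util.Map;

class StorageMapSketch {
  private final Map<String, Object> storageMap = new HashMap<>();

  Object getStorage(String storageId) {
    synchronized (storageMap) {   // was unsynchronized before the CR
      return storageMap.get(storageId);
    }
  }

  int storageMapSize() {
    synchronized (storageMap) {   // matches the existing line-484 style
      return storageMap.size();
    }
  }
}
{code}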



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1554) Create disk tests for fault injection test

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1554?focusedWorklogId=322290&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322290
 ]

ASF GitHub Bot logged work on HDDS-1554:


Author: ASF GitHub Bot
Created on: 02/Oct/19 21:34
Start Date: 02/Oct/19 21:34
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #990: HDDS-1554. Create 
disk tests for fault injection test
URL: https://github.com/apache/hadoop/pull/990
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322290)
Time Spent: 40m  (was: 0.5h)

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch, HDDS-1554.013.patch, HDDS-1554.014.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The current plan for fault injection disk tests are:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-2229) ozonefs paths need host and port information for non HA environment

2019-10-02 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943175#comment-16943175
 ] 

Siyao Meng edited comment on HDDS-2229 at 10/2/19 9:30 PM:
---

[~nmaheshwari] As it turns out the reason for the failure is that hdfs shell 
didn't have ozone-site.xml in its config path. Therefore, it couldn't read the 
ozone.om.address config, which is required when the host and port number are 
omitted. This is not a bug on the code side.
Anyway, thanks for filing this. Closing this jira now.


was (Author: smeng):
[~nmaheshwari] As it turns out the reason for the failure is that hdfs shell 
didn't have ozone-site.xml in its config path. Therefore, it couldn't read the 
ozone.om.address config, which is required when the host and port is ignored. 
So this is not a bug on the code side.
Anyway, thanks for filing this. Closing this jira now.

> ozonefs paths need host and port information for non HA environment
> ---
>
> Key: HDDS-2229
> URL: https://issues.apache.org/jira/browse/HDDS-2229
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
>
>  
> For a non-HA environment, ozonefs paths need to have host and port info, like
> below:
> o3fs://bucket.volume.om-host:port/key
> Whereas for HA environments, the path will change to use a nameservice, like
> below:
> o3fs://bucket.volume.ns1/key
> This will create ambiguity. The user experience should be the same irrespective
> of the usage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2229) ozonefs paths need host and port information for non HA environment

2019-10-02 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943175#comment-16943175
 ] 

Siyao Meng commented on HDDS-2229:
--

[~nmaheshwari] As it turns out the reason for the failure is that hdfs shell 
didn't have ozone-site.xml in its config path. Therefore, it couldn't read the 
ozone.om.address config, which is required when the host and port is ignored. 
So this is not a bug on the code side.
Anyway, thanks for filing this. Closing this jira now.
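
To see why ozone-site.xml matters here: with host and port omitted from the
o3fs path, the client must fall back to ozone.om.address from its
configuration. A minimal illustration (not the actual o3fs resolution code):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class OmAddressLookup {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.addResource("ozone-site.xml");   // absent => the lookup below fails
    String omAddress = conf.get("ozone.om.address");
    if (omAddress == null) {
      // The failure mode described above: the short path form
      // cannot resolve the OM without this config.
      throw new IllegalArgumentException("ozone.om.address not configured");
    }
    System.out.println("Resolving OM at " + omAddress);
  }
}
{code}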

> ozonefs paths need host and port information for non HA environment
> ---
>
> Key: HDDS-2229
> URL: https://issues.apache.org/jira/browse/HDDS-2229
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
>
>  
> For a non-HA environment, ozonefs paths need to have host and port info, like
> below:
> o3fs://bucket.volume.om-host:port/key
> Whereas for HA environments, the path will change to use a nameservice, like
> below:
> o3fs://bucket.volume.ns1/key
> This will create ambiguity. The user experience should be the same irrespective
> of the usage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2229) ozonefs paths need host and port information for non HA environment

2019-10-02 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng resolved HDDS-2229.
--
Resolution: Not A Bug

> ozonefs paths need host and port information for non HA environment
> ---
>
> Key: HDDS-2229
> URL: https://issues.apache.org/jira/browse/HDDS-2229
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
>
>  
> For a non-HA environment, ozonefs paths need to have host and port info, like
> below:
> o3fs://bucket.volume.om-host:port/key
> Whereas for HA environments, the path will change to use a nameservice, like
> below:
> o3fs://bucket.volume.ns1/key
> This will create ambiguity. The user experience should be the same irrespective
> of the usage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2236) Remove default http-bind-host from ozone-default.xml

2019-10-02 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2236.
--
Resolution: Not A Problem

Discussed offline with [~arp]. This was an intentional choice to set 
http.bind.host to 0.0.0.0 so that in multihomed environments/normal clusters it 
binds to all interfaces.

> Remove default http-bind-host from ozone-default.xml
> 
>
> Key: HDDS-2236
> URL: https://issues.apache.org/jira/browse/HDDS-2236
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Right now, in the code to get HttpBindAddress:
>  
> final Optional<String> bindHost =
>     getHostNameFromConfigKeys(conf, bindHostKey);
> final Optional<Integer> addressPort =
>     getPortNumberFromConfigKeys(conf, addressKey);
> final Optional<String> addressHost =
>     getHostNameFromConfigKeys(conf, addressKey);
> String hostName = bindHost.orElse(addressHost.orElse(bindHostDefault));
> return NetUtils.createSocketAddr(
>     hostName + ":" + addressPort.orElse(bindPortDefault));
>  
> So even if http-address is mentioned in the config with some hostname,
> bind-host (0.0.0.0) will still be used, as ozone-default.xml has a value of
> 0.0.0.0 for http-bind-host.
>  
> Likewise, we need to delete the default 0.0.0.0 for recon, freon, and datanode.
> <property>
>   <name>ozone.om.http-bind-host</name>
>   <value>0.0.0.0</value>
>   <tag>OM, MANAGEMENT</tag>
>   <description>
>     The actual address the OM web server will bind to. If this optional
>     address is set, it overrides only the hostname portion of
>     ozone.om.http-address.
>   </description>
> </property>
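
The precedence problem is easy to demonstrate with plain java.util.Optional,
no Hadoop classes needed (hostnames are made up):

{code:java}
import java.util.Optional;

public class BindHostPrecedence {
  public static void main(String[] args) {
    // From ozone-default.xml: http-bind-host defaults to 0.0.0.0.
    Optional<String> bindHost = Optional.of("0.0.0.0");
    // The user only set http-address:
    Optional<String> addressHost = Optional.of("om1.example.com");
    // bindHost wins even though the user never set http-bind-host:
    String hostName = bindHost.orElse(addressHost.orElse("0.0.0.0"));
    System.out.println(hostName);   // prints 0.0.0.0, not om1.example.com
  }
}
{code}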



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14889) Ability to check if a block has a replica on provided storage

2019-10-02 Thread Ashvin Agrawal (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943166#comment-16943166
 ] 

Ashvin Agrawal commented on HDFS-14889:
---

Thanks for the review [~elgoiri]. I have updated the PR.

> Ability to check if a block has a replica on provided storage
> -
>
> Key: HDFS-14889
> URL: https://issues.apache.org/jira/browse/HDFS-14889
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ashvin Agrawal
>Assignee: Ashvin Agrawal
>Priority: Major
>
> Provided storage (HDFS-9806) allows data on external storage systems to 
> seamlessly appear as files on HDFS. However, in the implementation today, 
> there is no easy way to distinguish a {{Block}} belonging to an external 
> provided storage volume from a block belonging to the local cluster. This
> task adds that ability. The feature will be useful in hybrid scenarios where
> the local cluster hosts both kinds of blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14527) Stop all DataNodes may result in NN terminate

2019-10-02 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943165#comment-16943165
 ] 

Wei-Chiu Chuang commented on HDFS-14527:


Patch applies cleanly in branch-3.2 also.
But it doesn't compile in branch-3.1. I'll provide a patch shortly.

> Stop all DataNodes may result in NN terminate
> -
>
> Key: HDFS-14527
> URL: https://issues.apache.org/jira/browse/HDFS-14527
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HDFS-14527.001.patch, HDFS-14527.002.patch, 
> HDFS-14527.003.patch, HDFS-14527.004.patch, HDFS-14527.005.patch
>
>
> If we stop all datanodes of the cluster, BlockPlacementPolicyDefault#chooseTarget
> may get an ArithmeticException when calling #getMaxNodesPerRack, which throws
> the runtime exception out to BlockManager's ReplicationMonitor thread and
> then terminates the NN.
> The root cause is that BlockPlacementPolicyDefault#chooseTarget does not hold
> the global lock, and if all DataNodes die between
> {{clusterMap.getNumberOfLeaves()}} and {{getMaxNodesPerRack}} then it meets an
> {{ArithmeticException}} while invoking {{getMaxNodesPerRack}}.
> {code:java}
>   private DatanodeStorageInfo[] chooseTarget(int numOfReplicas,
>       Node writer,
>       List<DatanodeStorageInfo> chosenStorage,
>       boolean returnChosenNodes,
>       Set<Node> excludedNodes,
>       long blocksize,
>       final BlockStoragePolicy storagePolicy,
>       EnumSet<AddBlockFlag> addBlockFlags,
>       EnumMap<StorageType, Integer> sTypes) {
>     if (numOfReplicas == 0 || clusterMap.getNumOfLeaves()==0) {
>       return DatanodeStorageInfo.EMPTY_ARRAY;
>     }
>     ..
>     int[] result = getMaxNodesPerRack(chosenStorage.size(), numOfReplicas);
>     ..
>   }
> {code}
> Some detailed log show as following.
> {code:java}
> 2019-05-31 12:29:21,803 ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception. 
> java.lang.ArithmeticException: / by zero
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.getMaxNodesPerRack(BlockPlacementPolicyDefault.java:282)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:228)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:132)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.chooseTargets(BlockManager.java:4533)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.access$1800(BlockManager.java:4493)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1954)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1830)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4453)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4388)
> at java.lang.Thread.run(Thread.java:745)
> 2019-05-31 12:29:21,805 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> {code}
> To be honest, this is not a serious bug and is not easy to reproduce, since if
> we stop all Datanodes and keep only the NameNode alive, HDFS cannot offer
> service normally and we could only retrieve directories. It may be one corner
> case.
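
For illustration, a toy reproduction of the failure mode plus a defensive
guard (the rack math is simplified; this is not the committed patch):

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

class RackMathSketch {
  private final AtomicInteger numOfLeaves = new AtomicInteger(3);

  private int[] getMaxNodesPerRack(int chosen, int replicas) {
    int clusterSize = numOfLeaves.get();      // may have dropped to 0 by now
    return new int[] {(chosen + replicas) / clusterSize}; // "/ by zero" here
  }

  int[] chooseTarget(int chosen, int replicas) {
    if (replicas == 0 || numOfLeaves.get() == 0) {
      return new int[0];                      // the existing early return
    }
    numOfLeaves.set(0);                       // simulate all DNs dying here
    try {
      return getMaxNodesPerRack(chosen, replicas);
    } catch (ArithmeticException e) {
      return new int[0];                      // degrade instead of killing the NN
    }
  }
}
{code}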



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14527) Stop all DataNodes may result in NN terminate

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14527:
---
Fix Version/s: 3.2.2

> Stop all DataNodes may result in NN terminate
> -
>
> Key: HDFS-14527
> URL: https://issues.apache.org/jira/browse/HDFS-14527
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HDFS-14527.001.patch, HDFS-14527.002.patch, 
> HDFS-14527.003.patch, HDFS-14527.004.patch, HDFS-14527.005.patch
>
>
> If we stop all datanodes of the cluster, BlockPlacementPolicyDefault#chooseTarget
> may get an ArithmeticException when calling #getMaxNodesPerRack, which throws
> the runtime exception out to BlockManager's ReplicationMonitor thread and
> then terminates the NN.
> The root cause is that BlockPlacementPolicyDefault#chooseTarget does not hold
> the global lock, and if all DataNodes die between
> {{clusterMap.getNumberOfLeaves()}} and {{getMaxNodesPerRack}} then it meets an
> {{ArithmeticException}} while invoking {{getMaxNodesPerRack}}.
> {code:java}
>   private DatanodeStorageInfo[] chooseTarget(int numOfReplicas,
>       Node writer,
>       List<DatanodeStorageInfo> chosenStorage,
>       boolean returnChosenNodes,
>       Set<Node> excludedNodes,
>       long blocksize,
>       final BlockStoragePolicy storagePolicy,
>       EnumSet<AddBlockFlag> addBlockFlags,
>       EnumMap<StorageType, Integer> sTypes) {
>     if (numOfReplicas == 0 || clusterMap.getNumOfLeaves()==0) {
>       return DatanodeStorageInfo.EMPTY_ARRAY;
>     }
>     ..
>     int[] result = getMaxNodesPerRack(chosenStorage.size(), numOfReplicas);
>     ..
>   }
> {code}
> Some detailed log show as following.
> {code:java}
> 2019-05-31 12:29:21,803 ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception. 
> java.lang.ArithmeticException: / by zero
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.getMaxNodesPerRack(BlockPlacementPolicyDefault.java:282)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:228)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:132)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.chooseTargets(BlockManager.java:4533)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationWork.access$1800(BlockManager.java:4493)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1954)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1830)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4453)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4388)
> at java.lang.Thread.run(Thread.java:745)
> 2019-05-31 12:29:21,805 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> {code}
> To be honest, this is not a serious bug and is not easy to reproduce, since if
> we stop all Datanodes and keep only the NameNode alive, HDFS cannot offer
> service normally and we could only retrieve directories. It may be one corner
> case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14808) EC: Improper size values for corrupt ec block in LOG

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14808:
---
Fix Version/s: 3.2.2
   3.1.4

> EC: Improper size values for corrupt ec block in LOG 
> -
>
> Key: HDFS-14808
> URL: https://issues.apache.org/jira/browse/HDFS-14808
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14808-01.patch
>
>
> If the block corruption reason is size mismatch the log. The values shown and 
> compared are ambiguous.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14699) Erasure Coding: Storage not considered in live replica when replication streams hard limit reached to threshold

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14699:
---
Fix Version/s: 3.1.4

> Erasure Coding: Storage not considered in live replica when replication 
> streams hard limit reached to threshold
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> HDFS-14699.05.patch, image-2019-08-20-19-58-51-872.png, 
> image-2019-09-02-17-51-46-742.png
>
>
> We tried the EC function on an 80 node cluster with Hadoop 3.1.1 and hit the
> same scenario as described in https://issues.apache.org/jira/browse/HDFS-8881.
> Following are our testing steps; hope they can be helpful. (The following DNs
> hold the internal blocks under test.)
>  # we customized a new 10-2-1024k policy and used it on a path; now we have 12
> internal blocks (12 live blocks)
>  # decommission one DN; after the decommission completes, we have 13
> internal blocks (12 live blocks and 1 decommissioned block)
>  # then shut down one DN which did not have the same block id as the
> decommissioned block; now we have 12 internal blocks (11 live blocks and 1
> decommissioned block)
>  # after waiting for about 600s (before the heartbeat comes), recommission the
> decommissioned DN; now we have 12 internal blocks (11 live blocks and 1
> duplicate block)
>  # then EC does not reconstruct the missed block
> We think this is a critical issue for using the EC function in a production
> env. Could you help? Thanks a lot!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14754) Erasure Coding : The number of Under-Replicated Blocks never reduced

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14754:
---
Status: Patch Available  (was: Reopened)

Reopen & submit the branch-3.1 patch.
Branch-3.2 was cherry-picked without conflict.

> Erasure Coding :  The number of Under-Replicated Blocks never reduced
> -
>
> Key: HDFS-14754
> URL: https://issues.apache.org/jira/browse/HDFS-14754
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Critical
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HDFS-14754-addendum.001.patch, HDFS-14754.001.patch, 
> HDFS-14754.002.patch, HDFS-14754.003.patch, HDFS-14754.004.patch, 
> HDFS-14754.005.patch, HDFS-14754.006.patch, HDFS-14754.007.patch, 
> HDFS-14754.008.patch, HDFS-14754.branch-3.1.patch
>
>
> Using EC RS-3-2 with 6 DNs,
> we came across a scenario where, among the 5 EC blocks, the same block was
> replicated thrice and two blocks went missing.
> The replicated block was not being deleted and the missing blocks could not be reconstructed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14754) Erasure Coding : The number of Under-Replicated Blocks never reduced

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reopened HDFS-14754:


> Erasure Coding :  The number of Under-Replicated Blocks never reduced
> -
>
> Key: HDFS-14754
> URL: https://issues.apache.org/jira/browse/HDFS-14754
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Critical
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HDFS-14754-addendum.001.patch, HDFS-14754.001.patch, 
> HDFS-14754.002.patch, HDFS-14754.003.patch, HDFS-14754.004.patch, 
> HDFS-14754.005.patch, HDFS-14754.006.patch, HDFS-14754.007.patch, 
> HDFS-14754.008.patch, HDFS-14754.branch-3.1.patch
>
>
> Using EC RS-3-2 with 6 DNs,
> we came across a scenario where, among the 5 EC blocks, the same block was
> replicated thrice and two blocks went missing.
> The replicated block was not being deleted and the missing blocks could not be reconstructed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2223) Support ReadWrite lock in LockManager

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2223?focusedWorklogId=322234&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322234
 ]

ASF GitHub Bot logged work on HDDS-2223:


Author: ASF GitHub Bot
Created on: 02/Oct/19 20:39
Start Date: 02/Oct/19 20:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1564: 
HDDS-2223. Support ReadWrite lock in LockManager.
URL: https://github.com/apache/hadoop/pull/1564#discussion_r330760039
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java
 ##
 @@ -25,42 +25,146 @@
 
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.function.Consumer;
 
 /**
  * Manages the locks on a given resource. A new lock is created for each
  * and every unique resource. Uniqueness of resource depends on the
  * {@code equals} implementation of it.
  */
-public class LockManager<T> {
+public class LockManager<R> {
 
   private static final Logger LOG = LoggerFactory.getLogger(LockManager.class);
 
-  private final Map<T, ActiveLock> activeLocks = new ConcurrentHashMap<>();
+  private final Map<R, ActiveLock> activeLocks = new ConcurrentHashMap<>();
   private final GenericObjectPool<ActiveLock> lockPool =
       new GenericObjectPool<>(new PooledLockFactory());
 
   /**
-   * Creates new LockManager instance.
+   * Creates new LockManager instance with the given Configuration.
*
* @param conf Configuration object
*/
-  public LockManager(Configuration conf) {
-int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY,
+  public LockManager(final Configuration conf) {
+final int maxPoolSize = 
conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY,
 HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY_DEFAULT);
 lockPool.setMaxTotal(maxPoolSize);
   }
 
-
   /**
* Acquires the lock on given resource.
*
* If the lock is not available then the current thread becomes
* disabled for thread scheduling purposes and lies dormant until the
* lock has been acquired.
+   *
+   * @param resource on which the lock has to be acquired
+   * @deprecated Use {@link LockManager#writeLock} instead
+   */
+  public void lock(final R resource) {
+   writeLock(resource);
+  }
+
+  /**
+   * Releases the lock on given resource.
+   *
+   * @param resource for which the lock has to be released
+   * @deprecated Use {@link LockManager#writeUnlock} instead
+   */
+  public void unlock(final R resource) {
+   writeUnlock(resource);
+  }
+
+  /**
+   * Acquires the read lock on given resource.
+   *
+   * Acquires the read lock on resource if the write lock is not held by
+   * another thread and returns immediately.
+   *
+   * If the write lock on resource is held by another thread then
+   * the current thread becomes disabled for thread scheduling
+   * purposes and lies dormant until the read lock has been acquired.
+   *
+   * @param resource on which the read lock has to be acquired
+   */
+  public void readLock(final R resource) {
+acquire(resource, ActiveLock::readLock);
+  }
+
+  /**
+   * Releases the read lock on given resource.
+   *
+   * @param resource for which the read lock has to be released
+   * @throws IllegalMonitorStateException if the current thread does not
+   *  hold this lock
+   */
+  public void readUnlock(final R resource) throws IllegalMonitorStateException {
+    release(resource, ActiveLock::readUnlock);
+  }
+
+  /**
+   * Acquires the write lock on given resource.
+   *
+   * Acquires the write lock on resource if neither the read nor write lock
+   * are held by another thread and returns immediately.
+   *
+   * If the current thread already holds the write lock then the
+   * hold count is incremented by one and the method returns
+   * immediately.
+   *
+   * If the lock is held by another thread then the current
+   * thread becomes disabled for thread scheduling purposes and
+   * lies dormant until the write lock has been acquired.
+   *
+   * @param resource on which the lock has to be acquired
*/
-  public void lock(T resource) {
-    activeLocks.compute(resource, (k, v) -> {
-      ActiveLock lock;
+  public void writeLock(final R resource) {
+    acquire(resource, ActiveLock::writeLock);
+  }
+
+  /**
+   * Releases the write lock on given resource.
+   *
+   * @param resource for which the lock has to be released
+   * @throws IllegalMonitorStateException if the current thread does not
+   *  hold this lock
+   */
+  public void writeUnlock(final R resource) throws IllegalMonitorStateException {
+    release(resource, ActiveLock::writeUnlock);
+  }
+
+  /**
+   * Acquires the lock on given resource using the provided lock function.
+   *
+   * @param resource on which the lock has to be acquired
+   * @param lockFn function to acquire the lock
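
To make the read/write semantics discussed above concrete, here is a minimal, 
self-contained sketch of a per-resource read/write lock manager. The class and 
method names below (SimpleRwLockManager, lockFor) are illustrative stand-ins, not 
the patch's actual ActiveLock/GenericObjectPool-based implementation, which also 
pools and reference-counts locks.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public final class SimpleRwLockManager<R> {

  // One lock per unique resource; uniqueness follows equals()/hashCode().
  private final Map<R, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

  private ReentrantReadWriteLock lockFor(final R resource) {
    return locks.computeIfAbsent(resource, r -> new ReentrantReadWriteLock());
  }

  public void readLock(final R resource)    { lockFor(resource).readLock().lock(); }
  public void readUnlock(final R resource)  { lockFor(resource).readLock().unlock(); }
  public void writeLock(final R resource)   { lockFor(resource).writeLock().lock(); }
  public void writeUnlock(final R resource) { lockFor(resource).writeLock().unlock(); }

  public static void main(String[] args) {
    SimpleRwLockManager<String> mgr = new SimpleRwLockManager<>();
    mgr.readLock("volume1");        // many readers may hold this concurrently
    try {
      // read volume metadata ...
    } finally {
      mgr.readUnlock("volume1");
    }
    mgr.writeLock("volume1");       // exclusive; waits for readers to drain
    try {
      // mutate volume metadata ...
    } finally {
      mgr.writeUnlock("volume1");
    }
  }
}
{code}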

[jira] [Updated] (HDFS-14754) Erasure Coding : The number of Under-Replicated Blocks never reduced

2019-10-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14754:
---
Fix Version/s: 3.2.2

> Erasure Coding :  The number of Under-Replicated Blocks never reduced
> -
>
> Key: HDFS-14754
> URL: https://issues.apache.org/jira/browse/HDFS-14754
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Critical
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HDFS-14754-addendum.001.patch, HDFS-14754.001.patch, 
> HDFS-14754.002.patch, HDFS-14754.003.patch, HDFS-14754.004.patch, 
> HDFS-14754.005.patch, HDFS-14754.006.patch, HDFS-14754.007.patch, 
> HDFS-14754.008.patch, HDFS-14754.branch-3.1.patch
>
>
> Using EC RS-3-2 with 6 DataNodes, 
> we came across a scenario where, in an EC group of 5 blocks, the same block 
> was replicated three times and two blocks went missing.
> The replicated block was not being deleted, and the missing blocks could not be reconstructed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make OM Generic related configuration support HA style config

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=322233=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322233
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 02/Oct/19 20:38
Start Date: 02/Oct/19 20:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1511: HDDS-2162. Make 
OM Generic related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#issuecomment-537670279
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for branch |
   | -1 | mvninstall | 29 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 36 | hadoop-ozone in trunk failed. |
   | -1 | compile | 23 | hadoop-hdds in trunk failed. |
   | -1 | compile | 14 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1038 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1137 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 35 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | -1 | mvninstall | 37 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 41 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 18 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 18 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 64 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 875 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 36 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 2751 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1511 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 865a7f64ca11 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 685918e |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 

[jira] [Work logged] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2200?focusedWorklogId=322230=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322230
 ]

ASF GitHub Bot logged work on HDDS-2200:


Author: ASF GitHub Bot
Created on: 02/Oct/19 20:36
Start Date: 02/Oct/19 20:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1577: HDDS-2200 : 
Recon does not handle the NULL snapshot from OM DB cleanly.
URL: https://github.com/apache/hadoop/pull/1577#issuecomment-537669582
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 82 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 48 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 41 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 958 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1048 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 33 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 31 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 52 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 806 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 19 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-hdds in the patch failed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2561 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1577 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 64c3bf470a41 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 685918e |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 

[jira] [Work logged] (HDDS-2223) Support ReadWrite lock in LockManager

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2223?focusedWorklogId=37=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-37
 ]

ASF GitHub Bot logged work on HDDS-2223:


Author: ASF GitHub Bot
Created on: 02/Oct/19 20:34
Start Date: 02/Oct/19 20:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1564: HDDS-2223. 
Support ReadWrite lock in LockManager.
URL: https://github.com/apache/hadoop/pull/1564#issuecomment-537668863
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 46 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 39 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 12 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 58 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 952 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1042 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 16 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 30 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 51 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 798 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 100 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 23 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2579 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1564 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d6eb97a38316 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 685918e |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 

[jira] [Work logged] (HDDS-2158) Fix Json Injection in JsonUtils

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2158?focusedWorklogId=30=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-30
 ]

ASF GitHub Bot logged work on HDDS-2158:


Author: ASF GitHub Bot
Created on: 02/Oct/19 20:26
Start Date: 02/Oct/19 20:26
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1486: 
HDDS-2158. Fixing Json Injection Issue in JsonUtils.
URL: https://github.com/apache/hadoop/pull/1486#discussion_r330754661
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/RemoveAclBucketHandler.java
 ##
 @@ -92,8 +92,9 @@ public Void call() throws Exception {
     boolean result = client.getObjectStore().removeAcl(obj,
         OzoneAcl.parseAcl(acl));
 
-    System.out.printf("%s%n", JsonUtils.toJsonStringWithDefaultPrettyPrinter(
-        JsonUtils.toJsonString("Acl removed successfully: " + result)));
+    System.out.printf("%s%n", result ? "ACL removed successfully" :
+        "ACL not removed");
 
 Review comment:
   From my understanding, the addAcl behavior is: it returns true if the ACL is 
added successfully, and false when the ACL being added already exists.
   
   > 
   If we are trying to add an already existing ACL, shouldn't we return true?
   
   I think returning true is not the right behavior, as it would not be clear 
whether the add succeeded. We should return a clear message to the end user 
explaining the difference between true and false.
   
   `But I think that statement also does not convey the message properly. `
   
   Agreed, this was existing behavior; if you want to fix it in a new Jira, I am 
okay with that.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 30)
Time Spent: 3h 20m  (was: 3h 10m)

> Fix Json Injection in JsonUtils
> ---
>
> Key: HDDS-2158
> URL: https://issues.apache.org/jira/browse/HDDS-2158
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> JsonUtils#toJsonStringWithDefaultPrettyPrinter() does not validate the Json 
> String  before serializing it which could result in Json Injection.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
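
As background to the review thread above, here is a minimal sketch of the class 
of bug being fixed: building JSON by string concatenation lets attacker-controlled 
input inject extra fields, while serializing a plain value object lets the mapper 
escape everything. It assumes Jackson on the classpath; the demo class and values 
are hypothetical, not the actual JsonUtils code.

{code:java}
import java.util.Collections;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;

public final class JsonInjectionDemo {

  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    String userInput = "ok\", \"admin\": \"true";   // malicious value

    // Unsafe: the quote inside userInput terminates the string early and
    // smuggles in an extra "admin" field.
    String unsafe = "{\"status\": \"" + userInput + "\"}";
    System.out.println(unsafe);   // {"status": "ok", "admin": "true"}

    // Safe: serialize a value object; the quotes in userInput are escaped,
    // so only a single "status" field exists and the structure is preserved.
    Map<String, String> payload = Collections.singletonMap("status", userInput);
    String safe = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(payload);
    System.out.println(safe);
  }
}
{code}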



[jira] [Updated] (HDDS-2236) Remove default http-bind-host from ozone-default.xml

2019-10-02 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2236:
-
Description: 
Right now, in the code to get HttpBindAddress

 

final Optional<String> bindHost =
    getHostNameFromConfigKeys(conf, bindHostKey);

final Optional<Integer> addressPort =
    getPortNumberFromConfigKeys(conf, addressKey);

final Optional<String> addressHost =
    getHostNameFromConfigKeys(conf, addressKey);

String hostName = bindHost.orElse(addressHost.orElse(bindHostDefault));

return NetUtils.createSocketAddr(
    hostName + ":" + addressPort.orElse(bindPortdefault));

 

So, if http-address is mentioned in the config with some hostname, bind-host 
(0.0.0.0) will still be used, because ozone-default.xml sets a value of 0.0.0.0 
for http-bind-host.

 

Similarly, we need to delete the default 0.0.0.0 for recon, freon, and datanode:


<property>
  <name>ozone.om.http-bind-host</name>
  <value>0.0.0.0</value>
  <tag>OM, MANAGEMENT</tag>
  <description>
    The actual address the OM web server will bind to. If this optional
    address is set, it overrides only the hostname portion of
    ozone.om.http-address.
  </description>
</property>

  was:
Right now, in the code to get HttpBindAddress

 

final Optional<String> bindHost =
    getHostNameFromConfigKeys(conf, bindHostKey);

final Optional<Integer> addressPort =
    getPortNumberFromConfigKeys(conf, addressKey);

final Optional<String> addressHost =
    getHostNameFromConfigKeys(conf, addressKey);

String hostName = bindHost.orElse(addressHost.orElse(bindHostDefault));

return NetUtils.createSocketAddr(
    hostName + ":" + addressPort.orElse(bindPortdefault));

 

So, if http-address is mentioned with some hostname, bind-host (0.0.0.0) will 
still be used.

 

Similarly, we need to delete the default 0.0.0.0 for recon, freon, and datanode:


<property>
  <name>ozone.om.http-bind-host</name>
  <value>0.0.0.0</value>
  <tag>OM, MANAGEMENT</tag>
  <description>
    The actual address the OM web server will bind to. If this optional
    address is set, it overrides only the hostname portion of
    ozone.om.http-address.
  </description>
</property>


> Remove default http-bind-host from ozone-default.xml
> 
>
> Key: HDDS-2236
> URL: https://issues.apache.org/jira/browse/HDDS-2236
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Right now, in the code to get HttpBindAddress
>  
> final Optional<String> bindHost =
>  getHostNameFromConfigKeys(conf, bindHostKey);
> final Optional<Integer> addressPort =
>  getPortNumberFromConfigKeys(conf, addressKey);
> final Optional<String> addressHost =
>  getHostNameFromConfigKeys(conf, addressKey);
> String hostName = bindHost.orElse(addressHost.orElse(bindHostDefault));
> return NetUtils.createSocketAddr(
>  hostName + ":" + addressPort.orElse(bindPortdefault));
>  
> So, if http-address is mentioned in the config with some hostname, bind-host 
> (0.0.0.0) will still be used, because ozone-default.xml sets a value of 
> 0.0.0.0 for http-bind-host.
>  
> Similarly, we need to delete the default 0.0.0.0 for recon, freon, and datanode:
> 
> <property>
>   <name>ozone.om.http-bind-host</name>
>   <value>0.0.0.0</value>
>   <tag>OM, MANAGEMENT</tag>
>   <description>
>     The actual address the OM web server will bind to. If this optional
>     address is set, it overrides only the hostname portion of
>     ozone.om.http-address.
>   </description>
> </property>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
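
To make the precedence described in HDDS-2236 concrete, here is a minimal sketch 
of the quoted resolution logic, with the Configuration lookups replaced by 
Optional parameters. The class name and hard-coded values are illustrative only, 
not the actual Ozone code.

{code:java}
import java.util.Optional;

public final class BindHostResolutionDemo {

  static String resolve(Optional<String> bindHost, Optional<String> addressHost,
      Optional<Integer> addressPort, String bindHostDefault, int bindPortDefault) {
    // The bind-host key wins over the hostname part of the address key.
    String hostName = bindHost.orElse(addressHost.orElse(bindHostDefault));
    return hostName + ":" + addressPort.orElse(bindPortDefault);
  }

  public static void main(String[] args) {
    // No bind-host configured: the hostname from ozone.om.http-address is used.
    System.out.println(resolve(Optional.empty(), Optional.of("om1.example.com"),
        Optional.of(9874), "0.0.0.0", 9874));   // om1.example.com:9874

    // ozone-default.xml ships http-bind-host=0.0.0.0, so bindHost is always
    // present and the configured hostname above is silently ignored, which is
    // exactly what this Jira proposes to fix by removing the default.
    System.out.println(resolve(Optional.of("0.0.0.0"), Optional.of("om1.example.com"),
        Optional.of(9874), "0.0.0.0", 9874));   // 0.0.0.0:9874
  }
}
{code}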



[jira] [Created] (HDDS-2236) Remove default http-bind-host from ozone-default.xml

2019-10-02 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2236:


 Summary: Remove default http-bind-host from ozone-default.xml
 Key: HDDS-2236
 URL: https://issues.apache.org/jira/browse/HDDS-2236
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


Right now, in the code to get HttpBindAddress

 

final Optional<String> bindHost =
    getHostNameFromConfigKeys(conf, bindHostKey);

final Optional<Integer> addressPort =
    getPortNumberFromConfigKeys(conf, addressKey);

final Optional<String> addressHost =
    getHostNameFromConfigKeys(conf, addressKey);

String hostName = bindHost.orElse(addressHost.orElse(bindHostDefault));

return NetUtils.createSocketAddr(
    hostName + ":" + addressPort.orElse(bindPortdefault));

 

So, if http-address is mentioned with some hostname, bind-host (0.0.0.0) will 
still be used.

 

Similarly, we need to delete the default 0.0.0.0 for recon, freon, and datanode:


<property>
  <name>ozone.om.http-bind-host</name>
  <value>0.0.0.0</value>
  <tag>OM, MANAGEMENT</tag>
  <description>
    The actual address the OM web server will bind to. If this optional
    address is set, it overrides only the hostname portion of
    ozone.om.http-address.
  </description>
</property>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2200?focusedWorklogId=322200=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322200
 ]

ASF GitHub Bot logged work on HDDS-2200:


Author: ASF GitHub Bot
Created on: 02/Oct/19 20:03
Start Date: 02/Oct/19 20:03
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1577: HDDS-2200 : Recon 
does not handle the NULL snapshot from OM DB cleanly.
URL: https://github.com/apache/hadoop/pull/1577#issuecomment-537656726
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322200)
Time Spent: 0.5h  (was: 20m)

> Recon does not handle the NULL snapshot from OM DB cleanly.
> ---
>
> Key: HDDS-2200
> URL: https://issues.apache.org/jira/browse/HDDS-2200
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code}
> 2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location 
> got from OM.
> 2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on 
> Recon tasks.
> 2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' 
> run of ContainerKeyMapperTask.
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609319840
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609258721.
> 2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
> at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2200?focusedWorklogId=322195=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322195
 ]

ASF GitHub Bot logged work on HDDS-2200:


Author: ASF GitHub Bot
Created on: 02/Oct/19 19:55
Start Date: 02/Oct/19 19:55
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1577: HDDS-2200 : Recon 
does not handle the NULL snapshot from OM DB cleanly.
URL: https://github.com/apache/hadoop/pull/1577#issuecomment-537653624
 
 
   cc @vivekratnavel / @shwetayakkali / @swagle 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322195)
Time Spent: 20m  (was: 10m)

> Recon does not handle the NULL snapshot from OM DB cleanly.
> ---
>
> Key: HDDS-2200
> URL: https://issues.apache.org/jira/browse/HDDS-2200
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code}
> 2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location 
> got from OM.
> 2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on 
> Recon tasks.
> 2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' 
> run of ContainerKeyMapperTask.
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609319840
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609258721.
> 2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
> at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-10-02 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943111#comment-16943111
 ] 

Brahma Reddy Battula edited comment on HDFS-14284 at 10/2/19 7:53 PM:
--

OK, I just want to confirm that when the router can't access the State Store, we 
can shut down the router.
{quote}This shouldn't break compatibility as it would be a new field in the new 
remote exception.
{quote}
I was talking about the new NoNamenodesAvailableException, where we are going to 
add one more field (and this exception was introduced before a release). I was 
concerned about this.

[~ayushtkn] and [~inigoiri], if you both are OK, then I am OK.

 

[~hemanthboyina], you can update the patch as [~crh] suggested.

 


was (Author: brahmareddy):
Ok.Just I want to confirm when router is can't access state store we can 
shutdown the router.
{quote}This shouldn't break compatibility as it would be a new field in the new 
remote exception.
{quote}
I was talking about "new NoNamenodesAvailableException"  where we are going to 
add one more field( and this exception was introduced b. I was concerned about 
this.

[~ayushtkn] and [~inigoiri], if you both are ok. Then I am ok.

 

[~hemanthboyina] you can update the patch,as [~crh] suggested.

 

> RBF: Log Router identifier when reporting exceptions
> 
>
> Key: HDFS-14284
> URL: https://issues.apache.org/jira/browse/HDFS-14284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14284.001.patch, HDFS-14284.002.patch
>
>
> The typical setup is to use multiple Routers through 
> ConfiguredFailoverProxyProvider.
> In a regular HA Namenode setup, it is easy to know which NN was used.
> However, in RBF, any Router can be the one reporting the exception and it is 
> hard to know which was the one.
> We should have a way to identify which Router/Namenode was the one triggering 
> the exception.
> This would also apply with Observer Namenodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2200?focusedWorklogId=322193=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322193
 ]

ASF GitHub Bot logged work on HDDS-2200:


Author: ASF GitHub Bot
Created on: 02/Oct/19 19:52
Start Date: 02/Oct/19 19:52
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1577: HDDS-2200 
: Recon does not handle the NULL snapshot from OM DB cleanly.
URL: https://github.com/apache/hadoop/pull/1577
 
 
   - Fix NULL OM snapshot handling in Recon.
   - Bootstrap Recon startup with last known OM snapshot DB and Recon container 
DB.
   - Add more useful log lines. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322193)
Remaining Estimate: 0h
Time Spent: 10m

> Recon does not handle the NULL snapshot from OM DB cleanly.
> ---
>
> Key: HDDS-2200
> URL: https://issues.apache.org/jira/browse/HDDS-2200
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code}
> 2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location 
> got from OM.
> 2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on 
> Recon tasks.
> 2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' 
> run of ContainerKeyMapperTask.
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609319840
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609258721.
> 2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
> at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
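
The fix summarized in the pull request above boils down to a defensive pattern: 
check the snapshot for null and fall back to the last known local snapshot instead 
of passing null downstream. The sketch below illustrates that pattern only; the 
interface and method names are hypothetical stand-ins, not Recon's actual 
OzoneManagerServiceProviderImpl API.

{code:java}
final class SnapshotSyncSketch {

  interface OmClient {
    /** May return null when the OM has no snapshot to offer. */
    String getLatestSnapshotLocation();
  }

  private String lastKnownSnapshot;   // restored at startup, possibly null

  void syncDataFromOm(OmClient om) {
    String location = om.getLatestSnapshotLocation();
    if (location == null) {
      // Passing null downstream is what triggered the NullPointerException
      // in ContainerKeyMapperTask#reprocess; fall back or skip instead.
      if (lastKnownSnapshot == null) {
        System.err.println("Null snapshot from OM and no local fallback; "
            + "skipping this sync cycle.");
        return;
      }
      location = lastKnownSnapshot;
    }
    lastKnownSnapshot = location;
    reprocessTasks(location);
  }

  private void reprocessTasks(String snapshotLocation) {
    // Recon task re-initialization would run here against a non-null snapshot.
    System.out.println("Reprocessing tasks against " + snapshotLocation);
  }
}
{code}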



[jira] [Updated] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2200:
-
Labels: pull-request-available  (was: )

> Recon does not handle the NULL snapshot from OM DB cleanly.
> ---
>
> Key: HDDS-2200
> URL: https://issues.apache.org/jira/browse/HDDS-2200
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>
> {code}
> 2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location 
> got from OM.
> 2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on 
> Recon tasks.
> 2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' 
> run of ContainerKeyMapperTask.
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609319840
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609258721.
> 2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
> at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2227) GDPR key generation could benefit from secureRandom

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943113#comment-16943113
 ] 

Hudson commented on HDDS-2227:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17441 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17441/])
HDDS-2227. GDPR key generation could benefit from secureRandom. (#1574) 
(github: rev 685918ef41a9fff51a1a84718097b90b4a915e68)
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/security/TestGDPRSymmetricKey.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/GDPRSymmetricKey.java


> GDPR key generation could benefit from secureRandom
> ---
>
> Key: HDDS-2227
> URL: https://issues.apache.org/jira/browse/HDDS-2227
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> SecureRandom can be used for the symmetric key for GDPR. While GDPR is not 
> a security feature, this is a good-to-have optional feature.
> I want to thank Jonathan Leitschuh for originally noticing this issue and 
> reporting it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
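
For reference, here is a minimal sketch of the change's core idea: drawing the 
symmetric key bytes from SecureRandom, whose output comes from an OS entropy 
source and cannot be reproduced by guessing a seed, unlike java.util.Random. The 
key length and algorithm below are assumed for illustration and are not taken 
from Ozone's GDPRSymmetricKey.

{code:java}
import java.security.SecureRandom;
import javax.crypto.spec.SecretKeySpec;

public final class SecureKeyDemo {

  public static void main(String[] args) {
    SecureRandom random = new SecureRandom();
    byte[] keyBytes = new byte[16];   // 128-bit key, assumed length
    random.nextBytes(keyBytes);       // cryptographically strong random bytes

    SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");
    System.out.println("Generated " + key.getAlgorithm() + " key of "
        + keyBytes.length * 8 + " bits");
  }
}
{code}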



[jira] [Commented] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-10-02 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943111#comment-16943111
 ] 

Brahma Reddy Battula commented on HDFS-14284:
-

OK, I just want to confirm that when the router can't access the State Store, we 
can shut down the router.
{quote}This shouldn't break compatibility as it would be a new field in the new 
remote exception.
{quote}
I was talking about the new NoNamenodesAvailableException, where we are going to 
add one more field (and this exception was introduced before a release). I was 
concerned about this.

[~ayushtkn] and [~inigoiri], if you both are OK, then I am OK.

 

[~hemanthboyina], you can update the patch as [~crh] suggested.

 

> RBF: Log Router identifier when reporting exceptions
> 
>
> Key: HDFS-14284
> URL: https://issues.apache.org/jira/browse/HDFS-14284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14284.001.patch, HDFS-14284.002.patch
>
>
> The typical setup is to use multiple Routers through 
> ConfiguredFailoverProxyProvider.
> In a regular HA Namenode setup, it is easy to know which NN was used.
> However, in RBF, any Router can be the one reporting the exception and it is 
> hard to know which was the one.
> We should have a way to identify which Router/Namenode was the one triggering 
> the exception.
> This would also apply with Observer Namenodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-02 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2200 started by Aravindan Vijayan.
---
> Recon does not handle the NULL snapshot from OM DB cleanly.
> ---
>
> Key: HDDS-2200
> URL: https://issues.apache.org/jira/browse/HDDS-2200
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>
> {code}
> 2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location 
> got from OM.
> 2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on 
> Recon tasks.
> 2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' 
> run of ContainerKeyMapperTask.
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609319840
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609258721.
> 2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
> at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2164) om.db.checkpoints is filling up fast

2019-10-02 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-2164:

Status: Patch Available  (was: In Progress)

> om.db.checkpoints is filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should also clean this up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make OM Generic related configuration support HA style config

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=322188=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322188
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 02/Oct/19 19:43
Start Date: 02/Oct/19 19:43
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1511: HDDS-2162. 
Make OM Generic related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#issuecomment-537648780
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322188)
Time Spent: 6h 50m  (was: 6h 40m)

> Make OM Generic related configuration support HA style config
> -
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the configs, 
> like 
> -OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,-
> -OZONE_OM_KERBEROS_PRINCIPAL_KEY,-
> -OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,-
> -OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support configs which append 
> with service id and node id.-
>  
> Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
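
The HA-style config resolution this Jira describes follows the usual Hadoop 
pattern: look up the key suffixed with the service id and node id first, then 
fall back to the plain key. Below is a minimal sketch of that lookup with a plain 
Map standing in for Configuration; the helper names are illustrative, not the 
actual OmUtils code.

{code:java}
import java.util.HashMap;
import java.util.Map;

public final class HaConfigKeyDemo {

  // Builds e.g. "ozone.om.address" + ".omservice" + ".om1".
  static String addKeySuffixes(String key, String... suffixes) {
    StringBuilder sb = new StringBuilder(key);
    for (String suffix : suffixes) {
      sb.append('.').append(suffix);
    }
    return sb.toString();
  }

  // Prefer the fully qualified per-node key, then the generic key.
  static String get(Map<String, String> conf, String key,
      String serviceId, String nodeId) {
    String suffixed = addKeySuffixes(key, serviceId, nodeId);
    return conf.containsKey(suffixed) ? conf.get(suffixed) : conf.get(key);
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    conf.put("ozone.om.address", "0.0.0.0:9862");
    conf.put("ozone.om.address.omservice.om1", "om1.example.com:9862");

    System.out.println(get(conf, "ozone.om.address", "omservice", "om1"));
    // om1.example.com:9862
  }
}
{code}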



[jira] [Work logged] (HDDS-2162) Make OM Generic related configuration support HA style config

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=322186=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322186
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 02/Oct/19 19:42
Start Date: 02/Oct/19 19:42
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make OM Generic related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r330735520
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -309,13 +305,33 @@ private OzoneManager(OzoneConfiguration conf) throws IOException,
       AuthenticationException {
     super(OzoneVersionInfo.OZONE_VERSION_INFO);
     Preconditions.checkNotNull(conf);
-    configuration = conf;
+    configuration = new OzoneConfiguration(conf);
 
 Review comment:
   Thanks for the catch. This caused a test failure too; reverted it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322186)
Time Spent: 6.5h  (was: 6h 20m)

> Make OM Generic related configuration support HA style config
> -
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the configs, 
> like 
> -OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,-
> -OZONE_OM_KERBEROS_PRINCIPAL_KEY,-
> -OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,-
> -OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support configs which append 
> with service id and node id.-
>  
> Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make OM Generic related configuration support HA style config

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=322187=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322187
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 02/Oct/19 19:42
Start Date: 02/Oct/19 19:42
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make OM Generic related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r330735520
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -309,13 +305,33 @@ private OzoneManager(OzoneConfiguration conf) throws IOException,
       AuthenticationException {
     super(OzoneVersionInfo.OZONE_VERSION_INFO);
     Preconditions.checkNotNull(conf);
-    configuration = conf;
+    configuration = new OzoneConfiguration(conf);
 
 Review comment:
   Thanks for the catch and for bringing this up. This caused a test failure 
too; reverted it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322187)
Time Spent: 6h 40m  (was: 6.5h)

> Make OM Generic related configuration support HA style config
> -
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the configs, 
> like 
> -OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,-
> -OZONE_OM_KERBEROS_PRINCIPAL_KEY,-
> -OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,-
> -OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support configs which append 
> with service id and node id.-
>  
> Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2229) ozonefs paths need host and port information for non HA environment

2019-10-02 Thread Namit Maheshwari (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943104#comment-16943104
 ] 

Namit Maheshwari commented on HDDS-2229:


Discussed this with [~smeng]

{code}
-bash-4.2$ kinit -kt hadoopqa/keytabs/hdfs.headless.keytab hdfs
-bash-4.2$ hdfs dfs -ls o3fs://bucket1.volume1/
19/10/02 19:37:34 INFO ipc.Client: Retrying connect to server: 
0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
19/10/02 19:37:35 INFO ipc.Client: Retrying connect to server: 
0.0.0.0/0.0.0.0:9862. Already tried 1 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
19/10/02 19:37:36 INFO ipc.Client: Retrying connect to server: 
0.0.0.0/0.0.0.0:9862. Already tried 2 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
^C
{code}

It does not work without host and port information, as seen above.
After specifying that information, it works fine:
{code}
-bash-4.2$ hdfs dfs -ls 
o3fs://bucket1.volume1.xxx-xjhgyv-4.xxx-xjhgyv.root.xxx.site:9862/
-bash-4.2$
{code}
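
To spell out the ambiguity, a minimal sketch (the splitting rule is an
assumption for illustration, not the actual o3fs client code) of how the same
authority shape has to carry two meanings:

{code}
import java.net.URI;

// O3fsAuthoritySketch: the authority is "bucket.volume.<rest>", where <rest>
// is either "host:port" (non-HA) or a bare service id (HA).
public final class O3fsAuthoritySketch {
  public static void main(String[] args) {
    URI nonHa = URI.create("o3fs://bucket1.volume1.om-host:9862/key");
    URI ha = URI.create("o3fs://bucket1.volume1.ns1/key");
    for (URI uri : new URI[] {nonHa, ha}) {
      String[] parts = uri.getHost().split("\\.", 3); // bucket, volume, rest
      String om = uri.getPort() == -1
          ? parts[2]                        // HA: a service id such as "ns1"
          : parts[2] + ":" + uri.getPort(); // non-HA: explicit host:port
      System.out.println("bucket=" + parts[0] + " volume=" + parts[1]
          + " om=" + om);
    }
  }
}
{code}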

> ozonefs paths need host and port information for non HA environment
> ---
>
> Key: HDDS-2229
> URL: https://issues.apache.org/jira/browse/HDDS-2229
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
>
>  
> For a non-HA environment, ozonefs paths need to have host and port info,
> like below:
> o3fs://bucket.volume.om-host:port/key
> Whereas for HA environments, the path changes to use the nameservice, like
> below:
> o3fs://bucket.volume.ns1/key
> This creates ambiguity. The user experience should be the same irrespective
> of the deployment mode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2198) SCM should not consider containers in CLOSING state to come out of safemode

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2198?focusedWorklogId=322185&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322185
 ]

ASF GitHub Bot logged work on HDDS-2198:


Author: ASF GitHub Bot
Created on: 02/Oct/19 19:39
Start Date: 02/Oct/19 19:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1540: HDDS-2198. SCM 
should not consider containers in CLOSING state to come out of safemode.
URL: https://github.com/apache/hadoop/pull/1540#issuecomment-537647448
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|------:|:--------|:--------|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 32 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 851 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 21 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 967 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 46 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 36 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 721 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 2370 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1540 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e6961ad387b9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e8ae632 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 

[jira] [Work logged] (HDDS-2227) GDPR key generation could benefit from secureRandom

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2227?focusedWorklogId=322181&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322181
 ]

ASF GitHub Bot logged work on HDDS-2227:


Author: ASF GitHub Bot
Created on: 02/Oct/19 19:35
Start Date: 02/Oct/19 19:35
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1574: HDDS-2227. GDPR 
key generation could benefit from secureRandom.
URL: https://github.com/apache/hadoop/pull/1574#issuecomment-537646083
 
 
   @dineshchitlangia  Thank you for the review. I have committed this patch to 
the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322181)
Time Spent: 1h  (was: 50m)

> GDPR key generation could benefit from secureRandom
> ---
>
> Key: HDDS-2227
> URL: https://issues.apache.org/jira/browse/HDDS-2227
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> SecureRandom can be used for the symmetric key for GDPR. While GDPR is not
> a security feature, this is a good-to-have optional feature.
> I want to thank Jonathan Leitschuh for originally noticing this issue and
> reporting it.
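
For illustration, a minimal sketch of the idea (hedged: the class name and
the 128-bit AES key size are assumptions, not the committed patch):

{code}
import java.security.SecureRandom;
import javax.crypto.spec.SecretKeySpec;

// GdprKeySketch: draw key material from SecureRandom, a cryptographically
// strong generator, instead of java.util.Random, whose output stream is
// predictable from its seed.
public final class GdprKeySketch {
  public static SecretKeySpec newKey() {
    byte[] keyBytes = new byte[16];          // 128-bit symmetric key
    new SecureRandom().nextBytes(keyBytes);  // strong randomness
    return new SecretKeySpec(keyBytes, "AES");
  }
}
{code}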



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2227) GDPR key generation could benefit from secureRandom

2019-10-02 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2227.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk branch.

> GDPR key generation could benefit from secureRandom
> ---
>
> Key: HDDS-2227
> URL: https://issues.apache.org/jira/browse/HDDS-2227
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> SecureRandom can be used for the symmetric key for GDPR. While GDPR is not
> a security feature, this is a good-to-have optional feature.
> I want to thank Jonathan Leitschuh for originally noticing this issue and
> reporting it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2227) GDPR key generation could benefit from secureRandom

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2227?focusedWorklogId=322180&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322180
 ]

ASF GitHub Bot logged work on HDDS-2227:


Author: ASF GitHub Bot
Created on: 02/Oct/19 19:34
Start Date: 02/Oct/19 19:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1574: HDDS-2227. 
GDPR key generation could benefit from secureRandom.
URL: https://github.com/apache/hadoop/pull/1574
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322180)
Time Spent: 50m  (was: 40m)

> GDPR key generation could benefit from secureRandom
> ---
>
> Key: HDDS-2227
> URL: https://issues.apache.org/jira/browse/HDDS-2227
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> SecureRandom can be used for the symmetric key for GDPR. While GDPR is not
> a security feature, this is a good-to-have optional feature.
> I want to thank Jonathan Leitschuh for originally noticing this issue and
> reporting it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2227) GDPR key generation could benefit from secureRandom

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2227?focusedWorklogId=322179&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322179
 ]

ASF GitHub Bot logged work on HDDS-2227:


Author: ASF GitHub Bot
Created on: 02/Oct/19 19:34
Start Date: 02/Oct/19 19:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1574: HDDS-2227. GDPR 
key generation could benefit from secureRandom.
URL: https://github.com/apache/hadoop/pull/1574#issuecomment-537645817
 
 
   The failures are not related to this patch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322179)
Time Spent: 40m  (was: 0.5h)

> GDPR key generation could benefit from secureRandom
> ---
>
> Key: HDDS-2227
> URL: https://issues.apache.org/jira/browse/HDDS-2227
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> SecureRandom can be used for the symmetric key for GDPR. While GDPR is not
> a security feature, this is a good-to-have optional feature.
> I want to thank Jonathan Leitschuh for originally noticing this issue and
> reporting it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2227) GDPR key generation could benefit from secureRandom

2019-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2227?focusedWorklogId=322178&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-322178
 ]

ASF GitHub Bot logged work on HDDS-2227:


Author: ASF GitHub Bot
Created on: 02/Oct/19 19:34
Start Date: 02/Oct/19 19:34
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1574: HDDS-2227. 
GDPR key generation could benefit from secureRandom.
URL: https://github.com/apache/hadoop/pull/1574#issuecomment-537645815
 
 
   +1 LGTM, failures don't seem related to the patch. Thanks Anu.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 322178)
Time Spent: 0.5h  (was: 20m)

> GDPR key generation could benefit from secureRandom
> ---
>
> Key: HDDS-2227
> URL: https://issues.apache.org/jira/browse/HDDS-2227
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> SecureRandom can be used for the symmetric key for GDPR. While GDPR is not
> a security feature, this is a good-to-have optional feature.
> I want to thank Jonathan Leitschuh for originally noticing this issue and
> reporting it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2073) Make SCMSecurityProtocol message based

2019-10-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943101#comment-16943101
 ] 

Hudson commented on HDDS-2073:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17440 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17440/])
HDDS-2073. Make SCMSecurityProtocol message based. Contributed by Elek, 
(aengineer: rev ffd4e527256389d91dd8e4c49ca1681f70a790e2)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolClientSideTranslatorPB.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java
* (edit) hadoop-hdds/common/src/main/proto/SCMSecurityProtocol.proto
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/SCMSecurityProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/BaseInsightSubCommand.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/ScmProtocolSecurityInsight.java


> Make SCMSecurityProtocol message based
> --
>
> Key: HDDS-2073
> URL: https://issues.apache.org/jira/browse/HDDS-2073
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the gRPC
> service and the main message contains all the required common information
> (e.g. tracing).
> SCMSecurityProtocol.proto is not yet migrated to this approach. To make our
> generic debug tool more powerful and unify our protocols, I suggest
> transforming this protocol as well.
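
A minimal sketch of that pattern (the type names and command set are
illustrative, not the generated protobuf classes): every call goes through
one entry point, and common fields such as the trace id ride along on the
wrapper message.

{code}
// MessageDispatchSketch: a single submit() method dispatches on the command
// type carried by the wrapper, mirroring the one-method gRPC service shape.
public final class MessageDispatchSketch {
  enum Type { GET_CERTIFICATE, GET_CA_CERTIFICATE }

  static final class SecurityRequest {
    final Type cmdType;
    final String traceId;  // common field shared by every request
    SecurityRequest(Type cmdType, String traceId) {
      this.cmdType = cmdType;
      this.traceId = traceId;
    }
  }

  static String submit(SecurityRequest req) {
    switch (req.cmdType) {
      case GET_CERTIFICATE:    return "cert (trace " + req.traceId + ")";
      case GET_CA_CERTIFICATE: return "CA cert (trace " + req.traceId + ")";
      default: throw new IllegalArgumentException("Unknown: " + req.cmdType);
    }
  }

  public static void main(String[] args) {
    System.out.println(submit(
        new SecurityRequest(Type.GET_CERTIFICATE, "trace-1")));
  }
}
{code}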



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


