[jira] [Assigned] (HDFS-14830) The calculation of DataXceiver count is not accurate

2019-09-06 Thread Chen Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang reassigned HDFS-14830:
-

Assignee: Chen Zhang

> The calculation of DataXceiver count is not accurate
> 
>
> Key: HDFS-14830
> URL: https://issues.apache.org/jira/browse/HDFS-14830
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
>
> The DataNode uses threadGroup.activeCount() as the number of DataXceivers. This is 
> not accurate, since the threadGroup includes not only the DataXceiver and 
> DataXceiverServer threads: PacketResponder and BlockRecoveryWorker 
> threads are also in the same threadGroup.
> In the worst case, the reported DataXceiver count may be double the actual 
> count (e.g. when all DataXceivers process write-block operations, they create 
> the same number of PacketResponder threads at the same time).






[jira] [Created] (HDFS-14830) The calculation of DataXceiver count is not accurate

2019-09-06 Thread Chen Zhang (Jira)
Chen Zhang created HDFS-14830:
-

 Summary: The calculation of DataXceiver count is not accurate
 Key: HDFS-14830
 URL: https://issues.apache.org/jira/browse/HDFS-14830
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chen Zhang


The DataNode uses threadGroup.activeCount() as the number of DataXceivers. This is 
not accurate, since the threadGroup includes not only the DataXceiver and 
DataXceiverServer threads: PacketResponder and BlockRecoveryWorker threads are 
also in the same threadGroup.

In the worst case, the reported DataXceiver count may be double the actual 
count (e.g. when all DataXceivers process write-block operations, they create 
the same number of PacketResponder threads at the same time).
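
A minimal, runnable sketch of the overcounting (the class and thread names are 
illustrative, not the actual DataNode code):

{code:java}
import java.util.concurrent.CountDownLatch;

public class XceiverCountDemo {
  public static void main(String[] args) throws InterruptedException {
    ThreadGroup group = new ThreadGroup("dataXceiverServer");
    CountDownLatch done = new CountDownLatch(1);
    Runnable waitTask = () -> {
      try { done.await(); } catch (InterruptedException ignored) { }
    };
    // One write operation: a "DataXceiver" plus the "PacketResponder"
    // it spawns, both living in the same thread group.
    new Thread(group, waitTask, "DataXceiver").start();
    new Thread(group, waitTask, "PacketResponder").start();
    Thread.sleep(200);  // give both threads time to start
    // Prints 2, although only one DataXceiver exists.
    System.out.println("activeCount = " + group.activeCount());
    done.countDown();
  }
}
{code}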






[jira] [Updated] (HDDS-2100) Ozone TokenRenewer provider is incorrectly configured

2019-09-06 Thread Jitendra Nath Pandey (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-2100:
---
Status: Patch Available  (was: Open)

> Ozone TokenRenewer provider is incorrectly configured
> -
>
> Key: HDDS-2100
> URL: https://issues.apache.org/jira/browse/HDDS-2100
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Jitendra Nath Pandey
>Priority: Blocker
> Attachments: HDDS-2100.1.patch
>
>
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer}}
>  contains {{org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer}}.
> The right renewer class is 
> {{org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl$Renewer}}






[jira] [Updated] (HDDS-2100) Ozone TokenRenewer provider is incorrectly configured

2019-09-06 Thread Jitendra Nath Pandey (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-2100:
---
Attachment: HDDS-2100.1.patch

> Ozone TokenRenewer provider is incorrectly configured
> -
>
> Key: HDDS-2100
> URL: https://issues.apache.org/jira/browse/HDDS-2100
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Jitendra Nath Pandey
>Priority: Blocker
> Attachments: HDDS-2100.1.patch
>
>
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer}}
>  contains {{org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer}}.
> The right renewer class is 
> {{org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl$Renewer}}






[jira] [Created] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-06 Thread Jitendra Nath Pandey (Jira)
Jitendra Nath Pandey created HDDS-2101:
--

 Summary: Ozone filesystem provider doesn't exist
 Key: HDDS-2101
 URL: https://issues.apache.org/jira/browse/HDDS-2101
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Jitendra Nath Pandey


We don't have a filesystem provider in META-INF, i.e. the following file doesn't exist:
{{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}

See, for example:
{{hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
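
These provider files are java.util.ServiceLoader registrations: plain text, one 
implementation class per line. A sketch of what the missing Ozone file would 
presumably contain (the class name is an assumption based on the ozonefs 
module, not stated in this thread):

{code}
# content of META-INF/services/org.apache.hadoop.fs.FileSystem (assumed)
org.apache.hadoop.fs.ozone.OzoneFileSystem
{code}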






[jira] [Commented] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-09-06 Thread Chen Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924717#comment-16924717
 ] 

Chen Zhang commented on HDFS-14609:
---

[~crh] [~elgoiri], do you have time to help review the latest patch?

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14609.001.patch, HDFS-14609.002.patch, 
> HDFS-14609.003.patch, HDFS-14609.004.patch
>
>
> We worked on router-based federation security as part of HDFS-13532. We kept 
> it compatible with the way the namenode works. However, with HADOOP-16314 and 
> HDFS-16354 in trunk, the auth filters seem to have been changed, causing tests 
> to fail.
> Changes are needed in RBF accordingly, mainly fixing the broken tests.






[jira] [Updated] (HDDS-2100) Ozone TokenRenewer provider is incorrectly configured

2019-09-06 Thread Jitendra Nath Pandey (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-2100:
---
Target Version/s: 0.4.1

> Ozone TokenRenewer provider is incorrectly configured
> -
>
> Key: HDDS-2100
> URL: https://issues.apache.org/jira/browse/HDDS-2100
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Jitendra Nath Pandey
>Priority: Blocker
>
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer}}
>  contains {{org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer}}.
> The right renewer class is 
> {{org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl$Renewer}}






[jira] [Updated] (HDDS-2100) Ozone TokenRenewer provider is incorrectly configured

2019-09-06 Thread Jitendra Nath Pandey (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-2100:
---
Affects Version/s: 0.4.0

> Ozone TokenRenewer provider is incorrectly configured
> -
>
> Key: HDDS-2100
> URL: https://issues.apache.org/jira/browse/HDDS-2100
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Jitendra Nath Pandey
>Priority: Blocker
>
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer}}
>  contains {{org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer}}.
> The right renewer class is 
> {{org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl$Renewer}}






[jira] [Updated] (HDDS-2100) Ozone TokenRenewer provider is incorrectly configured

2019-09-06 Thread Jitendra Nath Pandey (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-2100:
---
Description: 
{{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer}}
 contains {{org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer}}.

The right renewer class is 
{{org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl$Renewer}}
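
Hadoop discovers TokenRenewer implementations via java.util.ServiceLoader, so 
the fix is a one-line change to the registration file. A sketch of the 
corrected provider file, using the class named above:

{code}
# META-INF/services/org.apache.hadoop.security.token.TokenRenewer
org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl$Renewer
{code}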

> Ozone TokenRenewer provider is incorrectly configured
> -
>
> Key: HDDS-2100
> URL: https://issues.apache.org/jira/browse/HDDS-2100
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Priority: Blocker
>
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer}}
>  contains {{org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer}}.
> The right renewer class is 
> {{org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl$Renewer}}






[jira] [Created] (HDDS-2100) Ozone TokenRenewer provider is incorrectly configured

2019-09-06 Thread Jitendra Nath Pandey (Jira)
Jitendra Nath Pandey created HDDS-2100:
--

 Summary: Ozone TokenRenewer provider is incorrectly configured
 Key: HDDS-2100
 URL: https://issues.apache.org/jira/browse/HDDS-2100
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Jitendra Nath Pandey









[jira] [Updated] (HDFS-14528) Failover from Active to Standby Failed

2019-09-06 Thread Ravuri Sushma sree (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravuri Sushma sree updated HDFS-14528:
--
Attachment: HDFS-14528.004.patch

> Failover from Active to Standby Failed  
> 
>
> Key: HDFS-14528
> URL: https://issues.apache.org/jira/browse/HDFS-14528
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Reporter: Ravuri Sushma sree
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HDFS-14528.003.patch, HDFS-14528.004.patch, 
> HDFS-14528.2.Patch, ZKFC_issue.patch
>
>
>  *In a cluster with more than one Standby namenode, manual failover throws an 
> exception in some cases.*
> *When trying to execute the failover command from active to standby,* 
> *_./hdfs haadmin -failover nn1 nn2, the below Exception is thrown:_*
>   Operation failed: Call From X-X-X-X/X-X-X-X to Y-Y-Y-Y: failed on 
> connection exception: java.net.ConnectException: Connection refused
> This is encountered in the following cases:
>  Scenario 1: 
> Namenodes - NN1(Active), NN2(Standby), NN3(Standby)
> When trying to manually fail over from NN1 to NN2, if NN3 is down, the 
> Exception is thrown.
> Scenario 2:
>  Namenodes - NN1(Active), NN2(Standby), NN3(Standby)
> ZKFCs - ZKFC1, ZKFC2, ZKFC3
> When trying to manually fail over from NN1 to NN3, if NN3's ZKFC (ZKFC3) is 
> down, the Exception is thrown.






[jira] [Commented] (HDFS-14795) Add Throttler for writing block

2019-09-06 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924697#comment-16924697
 ] 

Lisheng Sun commented on HDFS-14795:


Thanks [~elgoiri] for the good suggestions. I will update the patch later.

> Add Throttler for writing block
> ---
>
> Key: HDFS-14795
> URL: https://issues.apache.org/jira/browse/HDFS-14795
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14795.001.patch, HDFS-14795.002.patch
>
>
> DataXceiver#writeBlock
> {code:java}
> blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
>     mirrorAddr, null, targets, false);
> {code}
> As the above code shows, DataXceiver#writeBlock doesn't throttle.
> I think it is necessary to throttle block writes, and to add a throttler in 
> the PIPELINE_SETUP_APPEND_RECOVERY or 
> PIPELINE_SETUP_STREAMING_RECOVERY stage.
> The default throttler value is still null.
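
A hedged sketch of the kind of change being proposed, not the actual patch: 
{{DataTransferThrottler}} is HDFS's existing pacing utility, and the fifth 
argument of {{receiveBlock}} above is the throttler slot currently passed as 
null. The bandwidth cap below is an illustrative assumption:

{code:java}
// Sketch only: pass a DataTransferThrottler instead of null so that
// receiveBlock() paces the incoming block data.
long bytesPerSec = 64L * 1024 * 1024;  // illustrative 64 MB/s cap
DataTransferThrottler throttler = new DataTransferThrottler(bytesPerSec);
blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
    mirrorAddr, throttler, targets, false);
{code}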






[jira] [Commented] (HDFS-14655) [SBN Read] Namenode crashes if one of The JN is down

2019-09-06 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924675#comment-16924675
 ] 

Ayush Saxena commented on HDFS-14655:
-

Thanks [~xkrogen], I have uploaded v3 with the said change.

> [SBN Read] Namenode crashes if one of The JN is down
> 
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14655-01.patch, HDFS-14655-02.patch, 
> HDFS-14655-03.patch, HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}






[jira] [Resolved] (HDDS-1553) Add metrics in rack aware container placement policy

2019-09-06 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-1553.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

Thanks [~Sammi] for the contribution. I merged the change to trunk.

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)
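
Expressed as a minimal sketch (the names are hypothetical, for illustration 
only), the relationship between the three counters is:

{code:java}
// Illustrative only: A, B, C as described above.
class PlacementStats {
  long totalRequested;        // A: total requested datanode count
  long allocated;             // B: successfully allocated count (includes C)
  long allocatedCompromised;  // C: subset of B where some placement
                              //    constraint was relaxed

  long failedAllocations() {  // failed allocations = A - B
    return totalRequested - allocated;
  }
}
{code}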






[jira] [Work logged] (HDDS-1553) Add metrics in rack aware container placement policy

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?focusedWorklogId=308229&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308229
 ]

ASF GitHub Bot logged work on HDDS-1553:


Author: ASF GitHub Bot
Created on: 07/Sep/19 00:12
Start Date: 07/Sep/19 00:12
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1361: HDDS-1553. Add 
metrics in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#issuecomment-529052232
 
 
   Thanks @ChenSammi for the contribution. +1 for the latest change; I merged 
the change to trunk. 
 



Issue Time Tracking
---

Worklog Id: (was: 308229)
Time Spent: 3h 20m  (was: 3h 10m)

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)






[jira] [Commented] (HDDS-1553) Add metrics in rack aware container placement policy

2019-09-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924673#comment-16924673
 ] 

Hudson commented on HDDS-1553:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17250 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17250/])
HDDS-1553. Add metrics in rack aware container placement policy. (#1361) (xyao: 
rev c46d43ab138752459c67575055ab2b63da822152)
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRackAware.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementCapacity.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/ContainerPlacementPolicyFactory.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementMetrics.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementCapacity.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRandom.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRackAware.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/placement/TestContainerPlacement.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMContainerPlacementPolicyMetrics.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestContainerPlacementFactory.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRandom.java


> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)






[jira] [Work logged] (HDDS-1553) Add metrics in rack aware container placement policy

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?focusedWorklogId=308228&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308228
 ]

ASF GitHub Bot logged work on HDDS-1553:


Author: ASF GitHub Bot
Created on: 07/Sep/19 00:11
Start Date: 07/Sep/19 00:11
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1361: HDDS-1553. 
Add metrics in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 308228)
Time Spent: 3h 10m  (was: 3h)

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)






[jira] [Updated] (HDFS-14655) [SBN Read] Namenode crashes if one of The JN is down

2019-09-06 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14655:

Attachment: HDFS-14655-03.patch

> [SBN Read] Namenode crashes if one of The JN is down
> 
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14655-01.patch, HDFS-14655-02.patch, 
> HDFS-14655-03.patch, HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}






[jira] [Created] (HDDS-2099) Refactor to create pipeline via DN heartbeat response

2019-09-06 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HDDS-2099:


 Summary: Refactor to create pipeline via DN heartbeat response
 Key: HDDS-2099
 URL: https://issues.apache.org/jira/browse/HDDS-2099
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


Currently, SCM talks directly to the DN gRPC server to create a pipeline in a 
background thread. We should avoid direct communication from SCM to the DNs for 
better scalability of Ozone.






[jira] [Work logged] (HDDS-2098) Ozone shell command prints out ERROR when the log4j file is not present.

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2098?focusedWorklogId=308207&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308207
 ]

ASF GitHub Bot logged work on HDDS-2098:


Author: ASF GitHub Bot
Created on: 06/Sep/19 22:52
Start Date: 06/Sep/19 22:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1411: HDDS-2098 : 
Ozone shell command prints out ERROR when the log4j file …
URL: https://github.com/apache/hadoop/pull/1411#issuecomment-529039404
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 605 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 880 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 565 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 32 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 768 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 108 | hadoop-hdds in the patch passed. |
   | +1 | unit | 290 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 3560 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1411/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1411 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux 075bdf979e4e 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bb0b922 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1411/2/testReport/ |
   | Max. process+thread count | 310 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common U: hadoop-ozone/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1411/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308207)
Time Spent: 50m  (was: 40m)

> Ozone shell command prints out ERROR when the log4j file is not present.
> 
>
> Key: HDDS-2098
> URL: https://issues.apache.org/jira/browse/HDDS-2098
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> *Exception Trace*
> {code}
> log4j:ERROR Could not read configuration file from URL 
> [file:/etc/ozone/conf/ozone-shell-log4j.properties].
> java.io.FileNotFoundException: /etc/ozone/conf/ozone-shell-log4j.properties 
> (No such file or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.<init>(FileInputStream.java:138)
>   at java.io.FileInputStream.<init>(FileInputStream.java:93)
>   at 
> sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
>   at 
> sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
>   ...
> {code}

[jira] [Commented] (HDFS-14655) [SBN Read] Namenode crashes if one of The JN is down

2019-09-06 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924650#comment-16924650
 ] 

Erik Krogen commented on HDFS-14655:


v2 looks good to me except that I don't think {{newFixedThreadPool}} is the 
right approach. This will create 5 threads per JN even if only 1 is ever used 
since it sets the core pool size equal to the maximum pool size:
{code:title=HadoopExecutors}
public static ExecutorService newFixedThreadPool(int nThreads,
    ThreadFactory threadFactory) {
  return new HadoopThreadPoolExecutor(nThreads, nThreads,
      0L, TimeUnit.MILLISECONDS,
      new LinkedBlockingQueue<Runnable>(),
      threadFactory);
}
{code}
Instead, what we want is:
{code}
new HadoopThreadPoolExecutor(1, numThreads, 1L, TimeUnit.MINUTES,
    new LinkedBlockingQueue<Runnable>(), threadFactory);
{code}
This gives a core pool size of 1 thread, so there will always be 1 running for 
use. More can be spawned as-needed, but only up to {{numThreads}} at maximum.

> [SBN Read] Namenode crashes if one of The JN is down
> 
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14655-01.patch, HDFS-14655-02.patch, 
> HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}






[jira] [Work logged] (HDDS-2087) Remove the hard coded config key in ChunkManager

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2087?focusedWorklogId=308200&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308200
 ]

ASF GitHub Bot logged work on HDDS-2087:


Author: ASF GitHub Bot
Created on: 06/Sep/19 22:26
Start Date: 06/Sep/19 22:26
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1409: HDDS-2087. 
Remove the hard coded config key in ChunkManager
URL: https://github.com/apache/hadoop/pull/1409#issuecomment-529034433
 
 
   The unit and integration test failures are not related to the patch. 
@bharatviswa504 @anuengineer Thanks for your reviews!
 



Issue Time Tracking
---

Worklog Id: (was: 308200)
Time Spent: 1h 20m  (was: 1h 10m)

> Remove the hard coded config key in ChunkManager
> 
>
> Key: HDDS-2087
> URL: https://issues.apache.org/jira/browse/HDDS-2087
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Anu Engineer
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We have a hard-coded config key in {{ChunkManagerFactory.java}}:
>  
> {code}
> boolean scrubber = config.getBoolean(
>     "hdds.containerscrub.enabled",
>     false);
> {code}
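
A hedged sketch of the natural fix, hoisting the literal into a shared 
config-keys class; the constant names below are assumptions for illustration, 
not the actual patch:

{code:java}
// Sketch only: define the key and its default in one place...
public final class HddsConfigKeys {
  private HddsConfigKeys() { }
  public static final String HDDS_CONTAINER_SCRUB_ENABLED =
      "hdds.containerscrub.enabled";
  public static final boolean HDDS_CONTAINER_SCRUB_ENABLED_DEFAULT = false;
}
{code}

...and then reference it from the factory as 
{{config.getBoolean(HddsConfigKeys.HDDS_CONTAINER_SCRUB_ENABLED, HddsConfigKeys.HDDS_CONTAINER_SCRUB_ENABLED_DEFAULT)}}.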






[jira] [Work logged] (HDDS-2087) Remove the hard coded config key in ChunkManager

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2087?focusedWorklogId=308194&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308194
 ]

ASF GitHub Bot logged work on HDDS-2087:


Author: ASF GitHub Bot
Created on: 06/Sep/19 22:12
Start Date: 06/Sep/19 22:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1409: HDDS-2087. 
Remove the hard coded config key in ChunkManager
URL: https://github.com/apache/hadoop/pull/1409#issuecomment-529031436
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 631 | trunk passed |
   | +1 | compile | 395 | trunk passed |
   | +1 | checkstyle | 82 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 874 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 182 | trunk passed |
   | 0 | spotbugs | 468 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 681 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for patch |
   | +1 | mvninstall | 564 | the patch passed |
   | +1 | compile | 390 | the patch passed |
   | +1 | javac | 390 | the patch passed |
   | +1 | checkstyle | 83 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 707 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 188 | the patch passed |
   | +1 | findbugs | 663 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 287 | hadoop-hdds in the patch passed. |
   | -1 | unit | 186 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 6246 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1409/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1409 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 7260e9bf8f51 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b15c116 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1409/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1409/3/testReport/ |
   | Max. process+thread count | 1167 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1409/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 308194)
Time Spent: 1h 10m  (was: 1h)

> Remove the hard coded config key in ChunkManager
> 
>
> Key: HDDS-2087
> URL: https://issues.apache.org/jira/browse/HDDS-2087
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Anu Engineer
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We have a hard-coded config key in {{ChunkManagerFactory.java}}:
>  
> {code}
> boolean scrubber = config.getBoolean(
>     "hdds.containerscrub.enabled",
>     false);
> {code}

[jira] [Work logged] (HDDS-2007) Make ozone fs shell command work with OM HA service ids

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2007?focusedWorklogId=308193&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308193
 ]

ASF GitHub Bot logged work on HDDS-2007:


Author: ASF GitHub Bot
Created on: 06/Sep/19 22:11
Start Date: 06/Sep/19 22:11
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1360: HDDS-2007. 
Make ozone fs shell command work with OM HA service ids 
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321927084
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -137,12 +137,22 @@
   private Text dtService;
   private final boolean topologyAwareReadEnabled;
 
+  /**
+   * Creates RpcClient instance with the given configuration.
+   * @param conf Configuration
+   * @throws IOException
+   */
+  public RpcClient(Configuration conf) throws IOException {
 
 Review comment:
   Sure let's go with removing this old constructor then.
 



Issue Time Tracking
---

Worklog Id: (was: 308193)
Time Spent: 3h 40m  (was: 3.5h)

> Make ozone fs shell command work with OM HA service ids
> ---
>
> Key: HDDS-2007
> URL: https://issues.apache.org/jira/browse/HDDS-2007
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Build an HDFS-HA-like nameservice for OM HA so that the Ozone client can 
> access an Ozone HA cluster with ease.
> The majority of the work is already done in HDDS-972. But the problem is that 
> the client would crash if more than one service id 
> (ozone.om.service.ids) is configured in ozone-site.xml. This needs to be 
> addressed on the client side.






[jira] [Commented] (HDFS-14655) [SBN Read] Namenode crashes if one of The JN is down

2019-09-06 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924632#comment-16924632
 ] 

Ayush Saxena commented on HDFS-14655:
-

Thanks [~vagarychen] [~xkrogen] for the discussion. I agree that, as Chen 
mentioned, parallelism will generally not kick in, since we are also canceling 
the threads. But in extreme cases there might still be chances where it is 
useful. I have uploaded v2 with a config, using {{newFixedThreadPool}}. Please 
give it a check.

> [SBN Read] Namenode crashes if one of The JN is down
> 
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14655-01.patch, HDFS-14655-02.patch, 
> HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}






[jira] [Commented] (HDFS-14822) [SBN read] Revisit GlobalStateIdContext locking when getting server state id

2019-09-06 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924633#comment-16924633
 ] 

Chao Sun commented on HDFS-14822:
-

Thanks [~vagarychen]. Yes, this looks OK to me as well. Case 3) seems most 
relevant here. Reading the code, it looks to me that {{FSEditLog#txnId}} is 
updated as part of a write operation, which means that in the case of 
third-party communication, a client that wants to issue a read call after 
receiving a message from a client that just completed a write should be able to 
do so by first fetching the txnId from the ANN, without locking.

> [SBN read] Revisit GlobalStateIdContext locking when getting server state id
> 
>
> Key: HDFS-14822
> URL: https://issues.apache.org/jira/browse/HDFS-14822
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Attachments: HDFS-14822.001.patch
>
>
> As mentioned under HDFS-14277, one potential performance issue of Observer 
> reads is that {{GlobalStateIdContext#getLastSeenStateId}} calls 
> getCorrectLastAppliedOrWrittenTxId, which ends up acquiring a lock on the 
> txnid. We internally had some discussion and analysis; we believe this lock 
> can be avoided by calling the non-locking version of the method, 
> {{getLastAppliedOrWrittenTxId}}.
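
A minimal sketch of why the lock-free read is safe, assuming the transaction id 
is maintained as a volatile long (the class and method names are illustrative, 
not the actual FSEditLog code):

{code:java}
// Illustrative only: reads of a volatile long are atomic in Java, so the
// read path needs no lock even while the write path stays synchronized.
class TxnIdHolder {
  private volatile long txnId;

  synchronized void advance() {   // write path, e.g. logging an edit
    txnId++;
  }

  long lastSeenStateId() {        // read path, lock-free
    return txnId;
  }
}
{code}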






[jira] [Updated] (HDFS-14655) [SBN Read] Namenode crashes if one of The JN is down

2019-09-06 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14655:

Priority: Critical  (was: Major)

> [SBN Read] Namenode crashes if one of The JN is down
> 
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14655-01.patch, HDFS-14655-02.patch, 
> HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}
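
For context, a minimal sketch (not the HDFS-14655 patch) of why the "parallel executor" in the trace can exhaust native threads, assuming it is backed by an unbounded cached thread pool:

{code:java}
// Sketch only: an unbounded pool spawns a new native thread whenever all
// existing threads are busy (e.g. blocked on a dead JournalNode), so repeated
// submissions eventually fail with "unable to create new native thread".
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ExecutorSketch {
  // Grows without bound under sustained blocking submissions.
  ExecutorService unbounded = Executors.newCachedThreadPool();

  // Capped at one thread; extra submissions wait in the queue, so a slow
  // or down JN cannot drive unbounded thread creation.
  ExecutorService bounded = Executors.newSingleThreadExecutor();
}
{code}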



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14655) [SBN Read] Namenode crashes if one of The JN is down

2019-09-06 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14655:

Affects Version/s: 3.3.0

> [SBN Read] Namenode crashes if one of The JN is down
> 
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14655-01.patch, HDFS-14655-02.patch, 
> HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2007) Make ozone fs shell command work with OM HA service ids

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2007?focusedWorklogId=308191&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308191
 ]

ASF GitHub Bot logged work on HDDS-2007:


Author: ASF GitHub Bot
Created on: 06/Sep/19 22:06
Start Date: 06/Sep/19 22:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1360: 
HDDS-2007. Make ozone fs shell command work with OM HA service ids  
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321926066
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
 ##
 @@ -214,6 +216,11 @@ public 
OzoneManagerProtocolClientSideTranslatorPB(OzoneConfiguration conf,
 this.clientID = clientId;
   }
 
+  public OzoneManagerProtocolClientSideTranslatorPB(OzoneConfiguration conf,
 
 Review comment:
   Yes if possible.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308191)
Time Spent: 3.5h  (was: 3h 20m)

> Make ozone fs shell command work with OM HA service ids
> ---
>
> Key: HDDS-2007
> URL: https://issues.apache.org/jira/browse/HDDS-2007
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Build an HDFS HA-like nameservice for OM HA so that Ozone client can access 
> Ozone HA cluster with ease.
> The majority of the work is already done in HDDS-972. But the problem is that 
> the client would crash if there is more than one service id 
> (ozone.om.service.ids) configured in ozone-site.xml. This needs to be addressed 
> on the client side.
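
For illustration, a hypothetical use of the HA-aware factory overload proposed in PR #1360; the service id "omservice1" is an assumption, and the getRpcClient(omServiceId, conf) overload is from the attached patch, not yet a released API:

{code:java}
// Sketch only: obtaining an Ozone client by OM HA service id, assuming the
// getRpcClient(omServiceId, conf) overload from the attached PR.
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;

class OmHaClientSketch {
  public static void main(String[] args) throws Exception {
    OzoneConfiguration conf = new OzoneConfiguration();
    // "omservice1" must match one of the ids in ozone.om.service.ids
    try (OzoneClient client =
        OzoneClientFactory.getRpcClient("omservice1", conf)) {
      System.out.println(client.getObjectStore());
    }
  }
}
{code}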



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2007) Make ozone fs shell command work with OM HA service ids

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2007?focusedWorklogId=308190&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308190
 ]

ASF GitHub Bot logged work on HDDS-2007:


Author: ASF GitHub Bot
Created on: 06/Sep/19 22:05
Start Date: 06/Sep/19 22:05
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1360: 
HDDS-2007. Make ozone fs shell command work with OM HA service ids  
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321925879
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -137,12 +137,22 @@
   private Text dtService;
   private final boolean topologyAwareReadEnabled;
 
+  /**
+   * Creates RpcClient instance with the given configuration.
+   * @param conf Configuration
+   * @throws IOException
+   */
+  public RpcClient(Configuration conf) throws IOException {
 
 Review comment:
   My comment was about adding a @VisibleForTesting annotation, and I think this 
constructor can also be completely removed, as it is used only for testing; 
callers can use the new constructor instead.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308190)
Time Spent: 3h 20m  (was: 3h 10m)

> Make ozone fs shell command work with OM HA service ids
> ---
>
> Key: HDDS-2007
> URL: https://issues.apache.org/jira/browse/HDDS-2007
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Build an HDFS HA-like nameservice for OM HA so that Ozone client can access 
> Ozone HA cluster with ease.
> The majority of the work is already done in HDDS-972. But the problem is that 
> the client would crash if there is more than one service id 
> (ozone.om.service.ids) configured in ozone-site.xml. This needs to be addressed 
> on the client side.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14655) [SBN Read] Namenode crashes if one of The JN is down

2019-09-06 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14655:

Attachment: HDFS-14655-02.patch

> [SBN Read] Namenode crashes if one of The JN is down
> 
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14655-01.patch, HDFS-14655-02.patch, 
> HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12831) HDFS throws FileNotFoundException on getFileBlockLocations(path-to-directory)

2019-09-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924626#comment-16924626
 ] 

Hadoop QA commented on HDFS-12831:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 37 unchanged - 1 fixed = 38 total (was 38) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}188m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.contract.hdfs.TestHDFSContractOpen |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.server.namenode.TestINodeFile |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-12831 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979703/HDFS-12831.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6e09b9a91e66 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b71a7f1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| 

[jira] [Work logged] (HDDS-2007) Make ozone fs shell command work with OM HA service ids

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2007?focusedWorklogId=308174&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308174
 ]

ASF GitHub Bot logged work on HDDS-2007:


Author: ASF GitHub Bot
Created on: 06/Sep/19 21:29
Start Date: 06/Sep/19 21:29
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1360: HDDS-2007. 
Make ozone fs shell command work with OM HA service ids 
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321917093
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
 ##
 @@ -214,6 +216,11 @@ public 
OzoneManagerProtocolClientSideTranslatorPB(OzoneConfiguration conf,
 this.clientID = clientId;
   }
 
+  public OzoneManagerProtocolClientSideTranslatorPB(OzoneConfiguration conf,
 
 Review comment:
   @bharatviswa504 It turns out there is one caller here: 
https://github.com/apache/hadoop/blob/d69a1a0aa49614c084fa4b9546ace65aebe4/hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconControllerModule.java#L102
   
   But we can easily change it to use the new constructor. Shall we do that?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308174)
Time Spent: 3h 10m  (was: 3h)

> Make ozone fs shell command work with OM HA service ids
> ---
>
> Key: HDDS-2007
> URL: https://issues.apache.org/jira/browse/HDDS-2007
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Build an HDFS HA-like nameservice for OM HA so that Ozone client can access 
> Ozone HA cluster with ease.
> The majority of the work is already done in HDDS-972. But the problem is that 
> the client would crash if there is more than one service id 
> (ozone.om.service.ids) configured in ozone-site.xml. This needs to be addressed 
> on the client side.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2007) Make ozone fs shell command work with OM HA service ids

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2007?focusedWorklogId=308171&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308171
 ]

ASF GitHub Bot logged work on HDDS-2007:


Author: ASF GitHub Bot
Created on: 06/Sep/19 21:28
Start Date: 06/Sep/19 21:28
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1360: HDDS-2007. 
Make ozone fs shell command work with OM HA service ids 
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321916578
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java
 ##
 @@ -70,26 +71,46 @@
   private final UserGroupInformation ugi;
   private final Text delegationTokenService;
 
+  // TODO: Do we want this to be final?
+  private String omServiceId;
+
   public OMFailoverProxyProvider(OzoneConfiguration configuration,
-  UserGroupInformation ugi) throws IOException {
+  UserGroupInformation ugi, String omServiceId) throws IOException {
 this.conf = configuration;
 this.omVersion = RPC.getProtocolVersion(OzoneManagerProtocolPB.class);
 this.ugi = ugi;
-loadOMClientConfigs(conf);
+this.omServiceId = omServiceId;
+loadOMClientConfigs(conf, this.omServiceId);
 this.delegationTokenService = computeDelegationTokenService();
 
 currentProxyIndex = 0;
 currentProxyOMNodeId = omNodeIDList.get(currentProxyIndex);
   }
 
-  private void loadOMClientConfigs(Configuration config) throws IOException {
+  public OMFailoverProxyProvider(OzoneConfiguration configuration,
+  UserGroupInformation ugi) throws IOException {
+this(configuration, ugi, null);
+  }
+
+  private void loadOMClientConfigs(Configuration config, String omSvcId)
+  throws IOException {
 this.omProxies = new HashMap<>();
 this.omProxyInfos = new HashMap<>();
 this.omNodeIDList = new ArrayList<>();
 
-Collection<String> omServiceIds = config.getTrimmedStringCollection(
-OZONE_OM_SERVICE_IDS_KEY);
+Collection<String> omServiceIds;
+if (omSvcId == null) {
+  // When no OM service id is passed in
+  // Note: this branch will only be followed when omSvcId is null,
+  // meaning the host name/service id provided by user doesn't match any
+  // ozone.om.service.ids on the client side. Therefore, in this case
+  // just treat it as non-HA by assigning an empty list to omServiceIds
+  omServiceIds = new ArrayList<>();
+} else {
+  omServiceIds = Collections.singletonList(omSvcId);
+}
 
+// TODO: Remove this warning? Or change the message?
 
 Review comment:
   Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308171)
Time Spent: 3h  (was: 2h 50m)

> Make ozone fs shell command work with OM HA service ids
> ---
>
> Key: HDDS-2007
> URL: https://issues.apache.org/jira/browse/HDDS-2007
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Build an HDFS HA-like nameservice for OM HA so that Ozone client can access 
> Ozone HA cluster with ease.
> The majority of the work is already done in HDDS-972. But the problem is that 
> the client would crash if there is more than one service id 
> (ozone.om.service.ids) configured in ozone-site.xml. This needs to be addressed 
> on the client side.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2007) Make ozone fs shell command work with OM HA service ids

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2007?focusedWorklogId=308169&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308169
 ]

ASF GitHub Bot logged work on HDDS-2007:


Author: ASF GitHub Bot
Created on: 06/Sep/19 21:27
Start Date: 06/Sep/19 21:27
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1360: HDDS-2007. 
Make ozone fs shell command work with OM HA service ids 
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321916457
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java
 ##
 @@ -70,26 +71,46 @@
   private final UserGroupInformation ugi;
   private final Text delegationTokenService;
 
+  // TODO: Do we want this to be final?
+  private String omServiceId;
+
   public OMFailoverProxyProvider(OzoneConfiguration configuration,
-  UserGroupInformation ugi) throws IOException {
+  UserGroupInformation ugi, String omServiceId) throws IOException {
 this.conf = configuration;
 this.omVersion = RPC.getProtocolVersion(OzoneManagerProtocolPB.class);
 this.ugi = ugi;
-loadOMClientConfigs(conf);
+this.omServiceId = omServiceId;
+loadOMClientConfigs(conf, this.omServiceId);
 this.delegationTokenService = computeDelegationTokenService();
 
 currentProxyIndex = 0;
 currentProxyOMNodeId = omNodeIDList.get(currentProxyIndex);
   }
 
-  private void loadOMClientConfigs(Configuration config) throws IOException {
+  public OMFailoverProxyProvider(OzoneConfiguration configuration,
+  UserGroupInformation ugi) throws IOException {
+this(configuration, ugi, null);
+  }
+
+  private void loadOMClientConfigs(Configuration config, String omSvcId)
+  throws IOException {
 this.omProxies = new HashMap<>();
 this.omProxyInfos = new HashMap<>();
 this.omNodeIDList = new ArrayList<>();
 
-Collection<String> omServiceIds = config.getTrimmedStringCollection(
-OZONE_OM_SERVICE_IDS_KEY);
+Collection<String> omServiceIds;
+if (omSvcId == null) {
 
 Review comment:
   @bharatviswa504 You are right. We can remove the condition here completely 
as it makes no difference.
   
   Will do the refactoring in a new jira.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308169)
Time Spent: 2h 50m  (was: 2h 40m)

> Make ozone fs shell command work with OM HA service ids
> ---
>
> Key: HDDS-2007
> URL: https://issues.apache.org/jira/browse/HDDS-2007
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Build an HDFS HA-like nameservice for OM HA so that Ozone client can access 
> Ozone HA cluster with ease.
> The majority of the work is already done in HDDS-972. But the problem is that 
> the client would crash if there is more than one service id 
> (ozone.om.service.ids) configured in ozone-site.xml. This needs to be addressed 
> on the client side.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2007) Make ozone fs shell command work with OM HA service ids

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2007?focusedWorklogId=308167&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308167
 ]

ASF GitHub Bot logged work on HDDS-2007:


Author: ASF GitHub Bot
Created on: 06/Sep/19 21:24
Start Date: 06/Sep/19 21:24
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1360: HDDS-2007. 
Make ozone fs shell command work with OM HA service ids 
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321915663
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -137,12 +137,22 @@
   private Text dtService;
   private final boolean topologyAwareReadEnabled;
 
+  /**
+   * Creates RpcClient instance with the given configuration.
+   * @param conf Configuration
+   * @throws IOException
+   */
+  public RpcClient(Configuration conf) throws IOException {
 
 Review comment:
   @bharatviswa504 I believe the `VisibleForTesting` annotation doesn't hide 
the constructor from other code.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308167)
Time Spent: 2h 40m  (was: 2.5h)

> Make ozone fs shell command work with OM HA service ids
> ---
>
> Key: HDDS-2007
> URL: https://issues.apache.org/jira/browse/HDDS-2007
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Build an HDFS HA-like nameservice for OM HA so that Ozone client can access 
> Ozone HA cluster with ease.
> The majority of the work is already done in HDDS-972. But the problem is that 
> the client would crash if there is more than one service id 
> (ozone.om.service.ids) configured in ozone-site.xml. This needs to be addressed 
> on the client side.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2015) Encrypt/decrypt key using symmetric key while writing/reading

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2015?focusedWorklogId=308163&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308163
 ]

ASF GitHub Bot logged work on HDDS-2015:


Author: ASF GitHub Bot
Created on: 06/Sep/19 21:17
Start Date: 06/Sep/19 21:17
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#issuecomment-529017454
 
 
   Thanks @bharatviswa504, @ajayydv, @anuengineer for a great review!
   Thanks @anuengineer for the commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308163)
Time Spent: 6h  (was: 5h 50m)

> Encrypt/decrypt key using symmetric key while writing/reading
> -
>
> Key: HDDS-2015
> URL: https://issues.apache.org/jira/browse/HDDS-2015
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> *Key Write Path (Encryption)*
> When a bucket metadata has gdprEnabled=true, we generate the GDPRSymmetricKey 
> and add it to Key Metadata before we create the Key.
> This ensures that the key is encrypted before writing.
> *Key Read Path (Decryption)*
> While reading the Key, we check for gdprEnabled=true and then get the 
> GDPRSymmetricKey based on the secret/algorithm fetched from the Key Metadata.
> Create a stream to decrypt the key and pass it on to the client.
> *Test*
> Create Key in GDPR Enabled Bucket -> Read Key -> Verify content is as 
> expected -> Update Key Metadata to remove the gdprEnabled flag -> Read Key -> 
> Confirm the content is not as expected.
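
A minimal sketch of the write-path wrapping described above, assuming a 16-byte AES secret carried in the key metadata; the helper name and signature are illustrative, not Ozone's actual GDPRSymmetricKey API:

{code:java}
// Sketch only: encrypt bytes on the way out with a symmetric key.
// Names here are illustrative, not Ozone's API.
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.SecretKeySpec;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

class GdprStreamSketch {
  static OutputStream wrapForWrite(OutputStream raw, String secret,
      String algorithm) throws Exception {
    SecretKeySpec key =
        new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), algorithm);
    Cipher cipher = Cipher.getInstance(algorithm);
    cipher.init(Cipher.ENCRYPT_MODE, key);
    // Everything written through the returned stream is encrypted before it
    // reaches the underlying block output stream; the read path mirrors this
    // with DECRYPT_MODE and a CipherInputStream.
    return new CipherOutputStream(raw, cipher);
  }
}
{code}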



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes. {Static}

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924587#comment-16924587
 ] 

Íñigo Goiri commented on HDFS-14090:


Thanks [~crh], it's cleaner with the IllegalArgumentException.

Minor comments:
* In the TestRouterHandlersFairness#startLoadTest:
** It would be good to have a javadoc even though it is private.
** No need to catch the exception and then fail. When getting the results 
in line 171, any exception should come wrapped in an 
ExecutionException; we should catch those and rethrow them.
** Let's make numOps final.

> RBF: Improved isolation for downstream name nodes. {Static}
> ---
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, 
> HDFS-14090-HDFS-13891.002.patch, HDFS-14090-HDFS-13891.003.patch, 
> HDFS-14090-HDFS-13891.004.patch, HDFS-14090-HDFS-13891.005.patch, 
> HDFS-14090.006.patch, HDFS-14090.007.patch, HDFS-14090.008.patch, 
> HDFS-14090.009.patch, HDFS-14090.010.patch, HDFS-14090.011.patch, 
> HDFS-14090.012.patch, RBF_ Isolation design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures should 
> help ensure that clients connecting to healthy clusters are not impacted by 
> unhealthy ones.
> For example - if there are 2 name nodes downstream, and one of them is 
> heavily loaded with calls spiking rpc queue times, due to back pressure the 
> same will start reflecting on the router. As a result, clients 
> connecting to healthy/faster name nodes will also slow down, as the same rpc queue 
> is maintained for all calls at the router layer. Essentially the same IPC 
> thread pool is used by the router to connect to all name nodes.
> Currently the router uses one single rpc queue for all calls. Let's discuss how we 
> can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify the 
> downstream name node and maintain a separate queue for each underlying name 
> node. Another, simpler way is to maintain some sort of rate limiter configured 
> for each name node and let routers drop/reject/send error requests after a 
> certain threshold (a minimal sketch of this idea follows below). 
> This won’t be a simple change as router’s ‘Server’ layer would need redesign 
> and implementation. Currently this layer is the same as name node.
> Opening this ticket to discuss, design and implement this feature.
>  
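
A minimal sketch of the per-name-node rate limiter idea, using Guava's RateLimiter (already among Hadoop's dependencies); the class name and the non-blocking rejection policy are assumptions for illustration, not a proposed patch:

{code:java}
// Sketch only: one rate limiter per downstream name node; calls over the
// configured threshold are rejected instead of queuing behind slow NNs.
import com.google.common.util.concurrent.RateLimiter;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class PerNameNodeRateLimiter {
  private final Map<String, RateLimiter> limiters = new ConcurrentHashMap<>();
  private final double permitsPerSecond;

  PerNameNodeRateLimiter(double permitsPerSecond) {
    this.permitsPerSecond = permitsPerSecond;
  }

  /** Returns true if a call to the given name node may proceed now. */
  boolean tryAcquire(String nsId) {
    return limiters
        .computeIfAbsent(nsId, id -> RateLimiter.create(permitsPerSecond))
        .tryAcquire(); // non-blocking: reject or error out instead of waiting
  }
}
{code}

The separate-queue alternative would replace tryAcquire() with a bounded per-name-node call queue, at the cost of redesigning the router's Server layer as the description notes.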



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2007) Make ozone fs shell command work with OM HA service ids

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2007?focusedWorklogId=308153&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308153
 ]

ASF GitHub Bot logged work on HDDS-2007:


Author: ASF GitHub Bot
Created on: 06/Sep/19 20:48
Start Date: 06/Sep/19 20:48
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1360: HDDS-2007. 
Make ozone fs shell command work with OM HA service ids 
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321904661
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
 ##
 @@ -136,6 +136,31 @@ public static OzoneClient getRpcClient(String omHost, 
Integer omRpcPort,
 return getRpcClient(config);
   }
 
+  /**
+   * Returns an OzoneClient which will use RPC protocol.
+   *
+   * @param omServiceId
+   *Service ID of OzoneManager HA cluster.
+   *
+   * @param config
+   *Configuration to be used for OzoneClient creation
+   *
+   * @return OzoneClient
+   *
+   * @throws IOException
+   */
+  public static OzoneClient getRpcClient(String omServiceId,
+  Configuration config)
+  throws IOException {
+Preconditions.checkNotNull(omServiceId);
+Preconditions.checkNotNull(config);
+// Override ozone.om.address just in case it is used later.
+// Because if this is not overridden, the (incorrect) value from xml
+// will be used?
+config.set(OZONE_OM_ADDRESS_KEY, omServiceId);
 
 Review comment:
   As discussed, I will remove this one. Thanks for pointing out!
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308153)
Time Spent: 2.5h  (was: 2h 20m)

> Make ozone fs shell command work with OM HA service ids
> ---
>
> Key: HDDS-2007
> URL: https://issues.apache.org/jira/browse/HDDS-2007
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Build an HDFS HA-like nameservice for OM HA so that Ozone client can access 
> Ozone HA cluster with ease.
> The majority of the work is already done in HDDS-972. But the problem is that 
> the client would crash if there is more than one service id 
> (ozone.om.service.ids) configured in ozone-site.xml. This needs to be addressed 
> on the client side.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2007) Make ozone fs shell command work with OM HA service ids

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2007?focusedWorklogId=308142&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308142
 ]

ASF GitHub Bot logged work on HDDS-2007:


Author: ASF GitHub Bot
Created on: 06/Sep/19 20:41
Start Date: 06/Sep/19 20:41
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1360: HDDS-2007. 
Make ozone fs shell command work with OM HA service ids 
URL: https://github.com/apache/hadoop/pull/1360#discussion_r321902382
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
 ##
 @@ -136,6 +136,31 @@ public static OzoneClient getRpcClient(String omHost, 
Integer omRpcPort,
 return getRpcClient(config);
   }
 
+  /**
+   * Returns an OzoneClient which will use RPC protocol.
+   *
+   * @param omServiceId
+   *Service ID of OzoneManager HA cluster.
+   *
+   * @param config
+   *Configuration to be used for OzoneClient creation
+   *
+   * @return OzoneClient
+   *
+   * @throws IOException
+   */
+  public static OzoneClient getRpcClient(String omServiceId,
 
 Review comment:
   @bharatviswa504 Yeah, I believe we discussed that and agreed to do this. 
But merging all those `getRpcClient()` overloads is out of the scope of this 
jira. We should probably open another jira to merge those 
functions.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308142)
Time Spent: 2h 20m  (was: 2h 10m)

> Make ozone fs shell command work with OM HA service ids
> ---
>
> Key: HDDS-2007
> URL: https://issues.apache.org/jira/browse/HDDS-2007
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Build an HDFS HA-like nameservice for OM HA so that Ozone client can access 
> Ozone HA cluster with ease.
> The majority of the work is already done in HDDS-972. But the problem is that 
> the client would crash if there is more than one service id 
> (ozone.om.service.ids) configured in ozone-site.xml. This needs to be addressed 
> on the client side.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1970) Upgrade Bootstrap and jQuery versions of Ozone web UIs

2019-09-06 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian resolved HDDS-1970.
--
Resolution: Fixed

> Upgrade Bootstrap and jQuery versions of Ozone web UIs 
> ---
>
> Key: HDDS-1970
> URL: https://issues.apache.org/jira/browse/HDDS-1970
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: website
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The current versions of bootstrap and jquery used by Ozone web UIs are 
> reported to have known medium severity CVEs and need to be updated to the 
> latest versions.
>  
> I suggest updating bootstrap and jQuery to 3.4.1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14528) Failover from Active to Standby Failed

2019-09-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924580#comment-16924580
 ] 

Hadoop QA commented on HDFS-14528:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  8m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 35s{color} | {color:orange} root: The patch generated 12 new + 42 unchanged 
- 0 fixed = 54 total (was 42) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 42s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
56s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}250m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestIPC |
|   | hadoop.hdfs.TestFileChecksumCompositeCrc |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14528 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1297/HDFS-14528.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 493cf040d682 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d98c548 |
| 

[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes. {Static}

2019-09-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924571#comment-16924571
 ] 

Hadoop QA commented on HDFS-14090:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 44s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14090 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979704/HDFS-14090.012.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux cecfc96c6fc1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b15c116 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27807/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27807/testReport/ |
| Max. process+thread count | 1597 (vs. 

[jira] [Commented] (HDFS-14793) BlockTokenSecretManager should LOG block token range it operates on.

2019-09-06 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924569#comment-16924569
 ] 

hemanthboyina commented on HDFS-14793:
--

Submitted patch, please check [~shv].

> BlockTokenSecretManager should LOG block token range it operates on.
> 
>
> Key: HDFS-14793
> URL: https://issues.apache.org/jira/browse/HDFS-14793
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14793.001.patch
>
>
> At startup, log enough information to identify the range of block token keys 
> for the NameNode. This should make it easier to debug issues with block 
> tokens.
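
A minimal sketch of the kind of startup log line being asked for; the range arithmetic below is an assumption for illustration, not BlockTokenSecretManager's actual fields or the eventual patch:

{code:java}
// Sketch only: log the serial-number range this NameNode's block token keys
// occupy, so token mismatches can be traced to a specific NN.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class BlockTokenRangeLogSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(BlockTokenRangeLogSketch.class);

  static void logKeyRange(int nnIndex, int rangeSize) {
    int rangeStart = nnIndex * rangeSize;
    int rangeEnd = rangeStart + rangeSize - 1;
    LOG.info("Block token key serial range for this NameNode: [{}, {}]",
        rangeStart, rangeEnd);
  }
}
{code}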



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14793) BlockTokenSecretManager should LOG block token range it operates on.

2019-09-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924565#comment-16924565
 ] 

Hadoop QA commented on HDFS-14793:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
51s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14793 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979707/HDFS-14793.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6a69e0053e6f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b15c116 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27808/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27808/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27808/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 

[jira] [Work logged] (HDDS-2098) Ozone shell command prints out ERROR when the log4j file is not present.

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2098?focusedWorklogId=308124=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308124
 ]

ASF GitHub Bot logged work on HDDS-2098:


Author: ASF GitHub Bot
Created on: 06/Sep/19 20:00
Start Date: 06/Sep/19 20:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1411: HDDS-2098 : 
Ozone shell command prints out ERROR when the log4j file …
URL: https://github.com/apache/hadoop/pull/1411#issuecomment-528994454
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 108 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 724 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 896 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 616 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | shellcheck | 33 | The patch generated 1 new + 3 unchanged - 0 fixed = 
4 total (was 3) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 770 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 112 | hadoop-hdds in the patch passed. |
   | +1 | unit | 297 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 3806 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1411/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1411 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux 5ce4281a3c08 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b15c116 |
   | shellcheck | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1411/1/artifact/out/diff-patch-shellcheck.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1411/1/testReport/ |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common U: hadoop-ozone/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1411/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308124)
Time Spent: 40m  (was: 0.5h)

> Ozone shell command prints out ERROR when the log4j file is not present.
> 
>
> Key: HDDS-2098
> URL: https://issues.apache.org/jira/browse/HDDS-2098
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> *Exception Trace*
> {code}
> log4j:ERROR Could not read configuration file from URL 
> [file:/etc/ozone/conf/ozone-shell-log4j.properties].
> java.io.FileNotFoundException: /etc/ozone/conf/ozone-shell-log4j.properties 
> (No such file or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.(FileInputStream.java:138)
>   at java.io.FileInputStream.(FileInputStream.java:93)
>   at 
> sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
>   at 
> 

[jira] [Work logged] (HDDS-2098) Ozone shell command prints out ERROR when the log4j file is not present.

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2098?focusedWorklogId=308123=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308123
 ]

ASF GitHub Bot logged work on HDDS-2098:


Author: ASF GitHub Bot
Created on: 06/Sep/19 20:00
Start Date: 06/Sep/19 20:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1411: HDDS-2098 
: Ozone shell command prints out ERROR when the log4j file …
URL: https://github.com/apache/hadoop/pull/1411#discussion_r321889337
 
 

 ##
 File path: hadoop-ozone/common/src/main/bin/ozone
 ##
 @@ -69,6 +69,12 @@ function ozonecmd_case
   subcmd=$1
   shift
 
+  ozone_default_log4j="${HADOOP_CONF_DIR}/log4j.properties"
+  ozone_shell_log4j="${HADOOP_CONF_DIR}/ozone-shell-log4j.properties"
+  if [ ! -f ${ozone_shell_log4j} ]; then
 
 Review comment:
   shellcheck:13: note: Double quote to prevent globbing and word splitting. 
[SC2086]
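
   For reference, a minimal sketch of the quoted form shellcheck is asking for,
   using the variable names from the patch above (the fallback assignment is an
   assumption based on the issue description, not necessarily the final patch):

   {code}
   ozone_default_log4j="${HADOOP_CONF_DIR}/log4j.properties"
   ozone_shell_log4j="${HADOOP_CONF_DIR}/ozone-shell-log4j.properties"
   # Quote the expansion so a path containing spaces is not word-split or globbed.
   if [ ! -f "${ozone_shell_log4j}" ]; then
     # Assumed fallback: reuse the default log4j file when the shell-specific
     # one is absent.
     ozone_shell_log4j="${ozone_default_log4j}"
   fi
   {code}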
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308123)
Time Spent: 0.5h  (was: 20m)

> Ozone shell command prints out ERROR when the log4j file is not present.
> 
>
> Key: HDDS-2098
> URL: https://issues.apache.org/jira/browse/HDDS-2098
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> *Exception Trace*
> {code}
> log4j:ERROR Could not read configuration file from URL 
> [file:/etc/ozone/conf/ozone-shell-log4j.properties].
> java.io.FileNotFoundException: /etc/ozone/conf/ozone-shell-log4j.properties 
> (No such file or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.(FileInputStream.java:138)
>   at java.io.FileInputStream.(FileInputStream.java:93)
>   at 
> sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
>   at 
> sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
>   at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
>   at org.apache.log4j.LogManager.(LogManager.java:127)
>   at org.slf4j.impl.Log4jLoggerFactory.(Log4jLoggerFactory.java:66)
>   at org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:72)
>   at 
> org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:45)
>   at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
>   at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
>   at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.(Shell.java:35)
> log4j:ERROR Ignoring configuration file 
> [file:/etc/ozone/conf/ozone-shell-log4j.properties].
> log4j:WARN No appenders could be found for logger 
> (io.jaegertracing.thrift.internal.senders.ThriftSenderFactory).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> {
>   "metadata" : { },
>   "name" : "vol-test-putfile-1567740142",
>   "admin" : "root",
>   "owner" : "root",
>   "creationTime" : 1567740146501,
>   "acls" : [ {
> "type" : "USER",
> "name" : "root",
> "aclScope" : "ACCESS",
> "aclList" : [ "ALL" ]
>   }, {
> "type" : "GROUP",
> "name" : "root",
> "aclScope" : "ACCESS",
> "aclList" : [ "ALL" ]
>   } ],
>   "quota" : 1152921504606846976
> }
> {code}
> *Fix*
> When a log4j file is not present, the default should be console.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14174) Enhance Audit for chown ( internally setOwner)

2019-09-06 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924562#comment-16924562
 ] 

Ayush Saxena commented on HDFS-14174:
-

Guess Wei-Chiu had some concerns regarding incompatibility.
Maybe you guys can share the logic or use-case behind the change.
Anyway, I know about the incompatibilities, but adding a configuration just for 
an audit log seems like overkill to me?

> Enhance Audit for chown ( internally setOwner)   
> -
>
> Key: HDFS-14174
> URL: https://issues.apache.org/jira/browse/HDFS-14174
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: hemanthboyina
>Priority: Minor
>
> When a hdfs dfs -chown command is executed, the audit log does not capture 
> the existing owner and the new owner.
> We need to capture the old and new owner to make auditing effective.
>  
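
As a rough illustration only (the field names and layout are assumptions, not a
committed format), an enriched audit entry could record both owners like this:

{code}
// Hypothetical audit line for setOwner, loosely following the NameNode
// audit-log style, with the old and new owner added:
String src = "/data/file1";
String oldOwner = "alice";
String newOwner = "bob";
System.out.println("allowed=true\tugi=admin\tcmd=setOwner\tsrc=" + src
    + "\toldOwner=" + oldOwner + "\tnewOwner=" + newOwner);
{code}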



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14816) TestFileCorruption#testCorruptionWithDiskFailure logic is not correct

2019-09-06 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924559#comment-16924559
 ] 

hemanthboyina commented on HDFS-14816:
--

         _It would be great if you could share some findings and logic behind._

The test case was added to check the DN storage state FAILED condition in 
findAndMarkBlockAsCorrupt.
With the existing code, the block's datanode storage never gets updated to 
FAILED, so when findAndMarkBlockAsCorrupt is called the block's datanode 
storage state is still NORMAL, and we are not testing what we want to test.

 

The patch updates the block's DN storage state to FAILED, so the storage state 
will be FAILED in the findAndMarkBlockAsCorrupt call.


 

> TestFileCorruption#testCorruptionWithDiskFailure logic is not correct
> -
>
> Key: HDFS-14816
> URL: https://issues.apache.org/jira/browse/HDFS-14816
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14816.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14816) TestFileCorruption#testCorruptionWithDiskFailure logic is not correct

2019-09-06 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924549#comment-16924549
 ] 

Ayush Saxena commented on HDFS-14816:
-

Had a very quick look at this.
[~hemanthboyina], can you help me understand what the actual problem is? I think 
the present logic marks all the storages as corrupt, and the block id doesn't 
play any role, since {{infos[i].updateFromStorage(storage);}} doesn't use the 
block id. It would be great if you could share some findings and the logic 
behind the change.

> TestFileCorruption#testCorruptionWithDiskFailure logic is not correct
> -
>
> Key: HDFS-14816
> URL: https://issues.apache.org/jira/browse/HDFS-14816
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14816.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2087) Remove the hard coded config key in ChunkManager

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2087?focusedWorklogId=308116=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308116
 ]

ASF GitHub Bot logged work on HDDS-2087:


Author: ASF GitHub Bot
Created on: 06/Sep/19 19:30
Start Date: 06/Sep/19 19:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1409: HDDS-2087. 
Remove the hard coded config key in ChunkManager
URL: https://github.com/apache/hadoop/pull/1409#issuecomment-528985415
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 592 | trunk passed |
   | +1 | compile | 378 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 878 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   | 0 | spotbugs | 420 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 615 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 39 | Maven dependency ordering for patch |
   | +1 | mvninstall | 537 | the patch passed |
   | +1 | compile | 385 | the patch passed |
   | +1 | javac | 385 | the patch passed |
   | +1 | checkstyle | 87 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | the patch passed |
   | +1 | findbugs | 630 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 283 | hadoop-hdds in the patch passed. |
   | -1 | unit | 184 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 6025 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1409/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1409 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 494335476d80 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a234175 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1409/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1409/2/testReport/ |
   | Max. process+thread count | 1286 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1409/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308116)
Time Spent: 1h  (was: 50m)

> Remove the hard coded config key in ChunkManager
> 
>
> Key: HDDS-2087
> URL: https://issues.apache.org/jira/browse/HDDS-2087
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Anu Engineer
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We have a hard-coded config key in the {{ChunkManagerFactory.java.}}
>  
> {code}
> boolean scrubber = config.getBoolean(
>  

[jira] [Commented] (HDFS-14174) Enhance Audit for chown ( internally setOwner)

2019-09-06 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924547#comment-16924547
 ] 

hemanthboyina commented on HDFS-14174:
--

[~jojochuang] [~ayushtkn] can we go ahead with this?

> Enhance Audit for chown ( internally setOwner)   
> -
>
> Key: HDFS-14174
> URL: https://issues.apache.org/jira/browse/HDFS-14174
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: hemanthboyina
>Priority: Minor
>
> When a hdfs dfs -chown command is executed, the audit log does not capture 
> the existing owner and the new owner.
> We need to capture the old and new owner to make auditing effective.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14754) Erasure Coding : The number of Under-Replicated Blocks never reduced

2019-09-06 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924546#comment-16924546
 ] 

hemanthboyina commented on HDFS-14754:
--

Fixed the checkstyle issues and updated the patch.

> Erasure Coding :  The number of Under-Replicated Blocks never reduced
> -
>
> Key: HDFS-14754
> URL: https://issues.apache.org/jira/browse/HDFS-14754
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Critical
> Attachments: HDFS-14754.001.patch, HDFS-14754.002.patch, 
> HDFS-14754.003.patch, HDFS-14754.004.patch, HDFS-14754.005.patch
>
>
> Using EC RS-3-2 with 6 DNs,
> we came across a scenario where, among the 5 blocks of the EC group, the same 
> block was replicated thrice and two blocks went missing.
> The replicated block was not being deleted, and the missing blocks could not 
> be reconstructed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14816) TestFileCorruption#testCorruptionWithDiskFailure logic is not correct

2019-09-06 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924544#comment-16924544
 ] 

hemanthboyina commented on HDFS-14816:
--

[~ayushtkn], can you have a look at this patch?

> TestFileCorruption#testCorruptionWithDiskFailure logic is not correct
> -
>
> Key: HDFS-14816
> URL: https://issues.apache.org/jira/browse/HDFS-14816
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14816.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14754) Erasure Coding : The number of Under-Replicated Blocks never reduced

2019-09-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924542#comment-16924542
 ] 

Hadoop QA commented on HDFS-14754:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 51s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 475 unchanged - 
0 fixed = 476 total (was 475) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14754 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979692/HDFS-14754.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e579d3fb57d1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d98c548 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27805/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27805/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27805/testReport/ |
| Max. process+thread 

[jira] [Commented] (HDFS-14811) RBF: TestRouterRpc#testErasureCoding is flaky

2019-09-06 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924540#comment-16924540
 ] 

Ayush Saxena commented on HDFS-14811:
-

Thanx [~zhangchen] for the details. I just had a cursory look at the logic; I 
need to dig in more. Maybe for the DataXceiver part you can raise a separate 
JIRA, since that is beyond the scope of what we are fixing here.
The most reasonable fix for the UT seems to be changing the conf itself, since 
I think that guarantees no failure due to this.
You could also check, by running the whole class a bunch of times, whether it 
fares well.
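
For reference, a minimal sketch of the conf change under discussion (the key
name is the standard one in current Hadoop 3.x; whether the patch takes this
route is still open):

{code}
// Disable load-based exclusion so a DataNode is never rejected as
// "too busy" during block placement in the test:
Configuration conf = new HdfsConfiguration();
conf.setBoolean("dfs.namenode.redundancy.considerLoad", false);
// ... start the test cluster with this conf
{code}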

> RBF: TestRouterRpc#testErasureCoding is flaky
> -
>
> Key: HDFS-14811
> URL: https://issues.apache.org/jira/browse/HDFS-14811
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14811.001.patch
>
>
> The Failed reason:
> {code:java}
> 2019-09-01 18:19:20,940 [IPC Server handler 5 on default port 53140] INFO  
> blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseRandom(838)) - [
> Node /default-rack/127.0.0.1:53148 [
> ]
> Node /default-rack/127.0.0.1:53161 [
> ]
> Node /default-rack/127.0.0.1:53157 [
>   Datanode 127.0.0.1:53157 is not chosen since the node is too busy (load: 3 
> > 2.6665).
> Node /default-rack/127.0.0.1:53143 [
> ]
> Node /default-rack/127.0.0.1:53165 [
> ]
> 2019-09-01 18:19:20,940 [IPC Server handler 5 on default port 53140] INFO  
> blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseRandom(846)) - Not enough replicas 
> was chosen. Reason: {NODE_TOO_BUSY=1}
> 2019-09-01 18:19:20,941 [IPC Server handler 5 on default port 53140] WARN  
> blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseTarget(449)) - Failed to place enough 
> replicas, still in need of 1 to reach 6 (unavailableStorages=[], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) 
> 2019-09-01 18:19:20,941 [IPC Server handler 5 on default port 53140] WARN  
> protocol.BlockStoragePolicy (BlockStoragePolicy.java:chooseStorageTypes(161)) 
> - Failed to place enough replicas: expected size is 1 but only 0 storage 
> types can be selected (replication=6, selected=[], unavailable=[DISK], 
> removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 2019-09-01 18:19:20,941 [IPC Server handler 5 on default port 53140] WARN  
> blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseTarget(449)) - Failed to place enough 
> replicas, still in need of 1 to reach 6 (unavailableStorages=[DISK], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All 
> required storage types are unavailable:  unavailableStorages=[DISK], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> 2019-09-01 18:19:20,941 [IPC Server handler 5 on default port 53140] INFO  
> ipc.Server (Server.java:logException(2982)) - IPC Server handler 5 on default 
> port 53140, call Call#1270 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 127.0.0.1:53202
> java.io.IOException: File /testec/testfile2 could only be written to 5 of the 
> 6 required nodes for RS-6-3-1024k. There are 6 datanode(s) running and 6 
> node(s) are excluded in this operation.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2815)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:893)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:574)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   

[jira] [Updated] (HDFS-14793) BlockTokenSecretManager should LOG block token range it operates on.

2019-09-06 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14793:
-
Attachment: HDFS-14793.001.patch
Status: Patch Available  (was: Open)

> BlockTokenSecretManager should LOG block token range it operates on.
> 
>
> Key: HDFS-14793
> URL: https://issues.apache.org/jira/browse/HDFS-14793
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14793.001.patch
>
>
> At startup, log enough information to identify the range of block token keys 
> for the NameNode. This should make it easier to debug issues with block 
> tokens.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12831) HDFS throws FileNotFoundException on getFileBlockLocations(path-to-directory)

2019-09-06 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924537#comment-16924537
 ] 

Ayush Saxena commented on HDFS-12831:
-

[~ste...@apache.org] so what is the conclusion from your side?
Do we go ahead with changing the type of exception, or with returning an empty 
array? 

> HDFS throws FileNotFoundException on getFileBlockLocations(path-to-directory)
> -
>
> Key: HDFS-12831
> URL: https://issues.apache.org/jira/browse/HDFS-12831
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.2
>Reporter: Steve Loughran
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-12831.001.patch
>
>
> The HDFS implementation of {{getFileBlockLocations(path, offset, len)}} 
> throws an exception if the path references a directory. 
> The base implementation (and all other filesystems) just return an empty 
> array, something implemented in {{getFileBlockLocations(filestatsus, offset, 
> len)}}; something written up in filesystem.md as the correct behaviour. 
> # has been shown to break things: SPARK-14959
> # there's no contract tests for these APIs; shows up in HADOOP-15044. 
> # even if this is considered a wontfix, it should raise something like 
> {{PathIsDirectoryException}} rather than FNFE
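
A sketch of that behaviour, written as an override in a hypothetical FileSystem
subclass (the super call stands in for the real per-block lookup):

{code}
@Override
public BlockLocation[] getFileBlockLocations(FileStatus file,
    long start, long len) throws IOException {
  if (file != null && file.isDirectory()) {
    // filesystem.md behaviour: a directory yields an empty array, not FNFE.
    return new BlockLocation[0];
  }
  return super.getFileBlockLocations(file, start, len);
}
{code}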



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1666) Improve logic in openKey when allocating block

2019-09-06 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1666:

Fix Version/s: (was: 0.5.0)
   0.4.1

> Improve logic in openKey when allocating block
> --
>
> Key: HDDS-1666
> URL: https://issues.apache.org/jira/browse/HDDS-1666
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> We set the size as below:
> {code}
> final long size = args.getDataSize() >= 0 ?
>  args.getDataSize() : scmBlockSize;
> {code}
>  
> and create OmKeyInfo with that size set. But when allocating a block for 
> openKey, we use:
> allocateBlockInKey(keyInfo, args.getDataSize(), currentTime);
>  
> I feel we should use the size computed above, so that we allocate at 
> least one block when the openKey call happens.
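
In code, the proposed change is essentially the following (a sketch based only
on the description above):

{code}
final long size = args.getDataSize() >= 0 ?
    args.getDataSize() : scmBlockSize;
// before: allocateBlockInKey(keyInfo, args.getDataSize(), currentTime);
// after: pass the normalized size so at least one block is allocated:
allocateBlockInKey(keyInfo, size, currentTime);
{code}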



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14793) BlockTokenSecretManager should LOG block token range it operates on.

2019-09-06 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-14793:


Assignee: hemanthboyina

> BlockTokenSecretManager should LOG block token range it operates on.
> 
>
> Key: HDFS-14793
> URL: https://issues.apache.org/jira/browse/HDFS-14793
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: hemanthboyina
>Priority: Major
>
> At startup, log enough information to identify the range of block token keys 
> for the NameNode. This should make it easier to debug issues with block 
> tokens.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2087) Remove the hard coded config key in ChunkManager

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2087?focusedWorklogId=308107=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308107
 ]

ASF GitHub Bot logged work on HDDS-2087:


Author: ASF GitHub Bot
Created on: 06/Sep/19 19:09
Start Date: 06/Sep/19 19:09
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1409: HDDS-2087. Remove 
the hard coded config key in ChunkManager
URL: https://github.com/apache/hadoop/pull/1409#issuecomment-528978496
 
 
   +1, I will commit this after the test run. Thx
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308107)
Time Spent: 50m  (was: 40m)

> Remove the hard coded config key in ChunkManager
> 
>
> Key: HDDS-2087
> URL: https://issues.apache.org/jira/browse/HDDS-2087
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Anu Engineer
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> We have a hard-coded config key in the {{ChunkManagerFactory.java.}}
>  
> {code}
> boolean scrubber = config.getBoolean(
>  "hdds.containerscrub.enabled",
>  false);
> {code}
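
A sketch of the obvious remedy: replace the literal with a named constant and
an explicit default (the constant names below are illustrative assumptions):

{code}
// Illustrative constants; the patch may name or place them differently.
public static final String HDDS_CONTAINER_SCRUB_ENABLED =
    "hdds.containerscrub.enabled";
public static final boolean HDDS_CONTAINER_SCRUB_ENABLED_DEFAULT = false;

// ChunkManagerFactory would then read the flag through the constants:
boolean scrubber = config.getBoolean(
    HDDS_CONTAINER_SCRUB_ENABLED,
    HDDS_CONTAINER_SCRUB_ENABLED_DEFAULT);
{code}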



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2098) Ozone shell command prints out ERROR when the log4j file is not present.

2019-09-06 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-2098:

Description: 
*Exception Trace*
{code}
log4j:ERROR Could not read configuration file from URL 
[file:/etc/ozone/conf/ozone-shell-log4j.properties].
java.io.FileNotFoundException: /etc/ozone/conf/ozone-shell-log4j.properties (No 
such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.(FileInputStream.java:138)
at java.io.FileInputStream.(FileInputStream.java:93)
at 
sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at 
sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at 
org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
at 
org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:72)
at 
org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at org.apache.hadoop.ozone.web.ozShell.Shell.(Shell.java:35)
log4j:ERROR Ignoring configuration file 
[file:/etc/ozone/conf/ozone-shell-log4j.properties].
log4j:WARN No appenders could be found for logger 
(io.jaegertracing.thrift.internal.senders.ThriftSenderFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
{
  "metadata" : { },
  "name" : "vol-test-putfile-1567740142",
  "admin" : "root",
  "owner" : "root",
  "creationTime" : 1567740146501,
  "acls" : [ {
"type" : "USER",
"name" : "root",
"aclScope" : "ACCESS",
"aclList" : [ "ALL" ]
  }, {
"type" : "GROUP",
"name" : "root",
"aclScope" : "ACCESS",
"aclList" : [ "ALL" ]
  } ],
  "quota" : 1152921504606846976
}
{code}


*Fix*
When a log4j file is not present, the default should be console.

  was:When a log4j file is not present, the default should be console.


> Ozone shell command prints out ERROR when the log4j file is not present.
> 
>
> Key: HDDS-2098
> URL: https://issues.apache.org/jira/browse/HDDS-2098
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *Exception Trace*
> {code}
> log4j:ERROR Could not read configuration file from URL 
> [file:/etc/ozone/conf/ozone-shell-log4j.properties].
> java.io.FileNotFoundException: /etc/ozone/conf/ozone-shell-log4j.properties 
> (No such file or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.(FileInputStream.java:138)
>   at java.io.FileInputStream.(FileInputStream.java:93)
>   at 
> sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
>   at 
> sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
>   at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
>   at org.apache.log4j.LogManager.(LogManager.java:127)
>   at org.slf4j.impl.Log4jLoggerFactory.(Log4jLoggerFactory.java:66)
>   at org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:72)
>   at 
> org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:45)
>   at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
>   at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
>   at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.(Shell.java:35)
> log4j:ERROR Ignoring configuration file 
> 

[jira] [Work logged] (HDDS-2098) Ozone shell command prints out ERROR when the log4j file is not present.

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2098?focusedWorklogId=308091=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308091
 ]

ASF GitHub Bot logged work on HDDS-2098:


Author: ASF GitHub Bot
Created on: 06/Sep/19 18:57
Start Date: 06/Sep/19 18:57
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1411: HDDS-2098 : Ozone 
shell command prints out ERROR when the log4j file …
URL: https://github.com/apache/hadoop/pull/1411#issuecomment-528973963
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308091)
Time Spent: 20m  (was: 10m)

> Ozone shell command prints out ERROR when the log4j file is not present.
> 
>
> Key: HDDS-2098
> URL: https://issues.apache.org/jira/browse/HDDS-2098
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When a log4j file is not present, the default should be console.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2098) Ozone shell command prints out ERROR when the log4j file is not present.

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2098?focusedWorklogId=308090=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308090
 ]

ASF GitHub Bot logged work on HDDS-2098:


Author: ASF GitHub Bot
Created on: 06/Sep/19 18:56
Start Date: 06/Sep/19 18:56
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1411: HDDS-2098 
: Ozone shell command prints out ERROR when the log4j file …
URL: https://github.com/apache/hadoop/pull/1411
 
 
   …is not present.
   
   
   Manually tested change on cluster.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308090)
Remaining Estimate: 0h
Time Spent: 10m

> Ozone shell command prints out ERROR when the log4j file is not present.
> 
>
> Key: HDDS-2098
> URL: https://issues.apache.org/jira/browse/HDDS-2098
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a log4j file is not present, the default should be console.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14090) RBF: Improved isolation for downstream name nodes. {Static}

2019-09-06 Thread CR Hota (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14090:
---
Attachment: HDFS-14090.012.patch

> RBF: Improved isolation for downstream name nodes. {Static}
> ---
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, 
> HDFS-14090-HDFS-13891.002.patch, HDFS-14090-HDFS-13891.003.patch, 
> HDFS-14090-HDFS-13891.004.patch, HDFS-14090-HDFS-13891.005.patch, 
> HDFS-14090.006.patch, HDFS-14090.007.patch, HDFS-14090.008.patch, 
> HDFS-14090.009.patch, HDFS-14090.010.patch, HDFS-14090.011.patch, 
> HDFS-14090.012.patch, RBF_ Isolation design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures, should 
> help minimize impact of clients connecting to healthy clusters vs unhealthy 
> clusters.
> For example - if there are 2 name nodes downstream and one of them is 
> heavily loaded, with calls spiking rpc queue times, back pressure will make the 
> same start reflecting on the router. As a result, clients 
> connecting to healthy/faster name nodes will also slow down, since the same rpc 
> queue is maintained for all calls at the router layer. Essentially the same IPC 
> thread pool is used by the router to connect to all name nodes.
> Currently the router uses one single rpc queue for all calls. Let's discuss how 
> we can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify the 
> downstream name node, and maintain a separate queue for each underlying name 
> node. Another, simpler way is to maintain some sort of rate limiter configured 
> for each name node and let routers drop/reject/send error requests after a 
> certain threshold.
> This won’t be a simple change, as the router’s ‘Server’ layer would need a 
> redesign and reimplementation. Currently this layer is the same as the name 
> node’s.
> Opening this ticket to discuss, design and implement this feature.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2098) Ozone shell command prints out ERROR when the log4j file is not present.

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2098:
-
Labels: pull-request-available  (was: )

> Ozone shell command prints out ERROR when the log4j file is not present.
> 
>
> Key: HDDS-2098
> URL: https://issues.apache.org/jira/browse/HDDS-2098
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>
> When a log4j file is not present, the default should be console.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2098) Ozone shell command prints out ERROR when the log4j file is not present.

2019-09-06 Thread Aravindan Vijayan (Jira)
Aravindan Vijayan created HDDS-2098:
---

 Summary: Ozone shell command prints out ERROR when the log4j file 
is not present.
 Key: HDDS-2098
 URL: https://issues.apache.org/jira/browse/HDDS-2098
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone CLI
Affects Versions: 0.5.0
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.5.0


When a log4j file is not present, the default should be console.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2015) Encrypt/decrypt key using symmetric key while writing/reading

2019-09-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924509#comment-16924509
 ] 

Hudson commented on HDDS-2015:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17246 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17246/])
HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading 
(aengineer: rev b15c116c1edaa71a3de86dbbab822ced9df37dbd)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/GDPRSymmetricKey.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
* (edit) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/security/TestGDPRSymmetricKey.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/PutKeyHandler.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java


> Encrypt/decrypt key using symmetric key while writing/reading
> -
>
> Key: HDDS-2015
> URL: https://issues.apache.org/jira/browse/HDDS-2015
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> *Key Write Path (Encryption)*
> When a bucket metadata has gdprEnabled=true, we generate the GDPRSymmetricKey 
> and add it to Key Metadata before we create the Key.
> This ensures that key is encrypted before writing.
> *Key Read Path(Decryption)*
> While reading the Key, we check for gdprEnabled=true and then get the 
> GDPRSymmetricKey based on secret/algorithm as fetched from Key Metadata.
> Create a stream to decrypt the key and pass it on to client.
> *Test*
> Create Key in GDPR Enabled Bucket -> Read Key -> Verify content is as 
> expected -> Update Key Metadata to remove the gdprEnabled flag -> Read Key -> 
> Confirm the content is not as expected.
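
For readers unfamiliar with the mechanics, a self-contained sketch of symmetric
encryption and decryption with javax.crypto, which is roughly what
GDPRSymmetricKey wraps (the hard-coded key and the plain "AES" transformation
are illustrative, not Ozone's actual parameters):

{code}
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class SymmetricKeyDemo {
  public static void main(String[] args) throws Exception {
    // Illustrative 16-byte AES key; Ozone keeps the real secret and
    // algorithm in the key's metadata when gdprEnabled=true.
    SecretKeySpec key = new SecretKeySpec(
        "0123456789abcdef".getBytes(StandardCharsets.UTF_8), "AES");

    Cipher cipher = Cipher.getInstance("AES");
    cipher.init(Cipher.ENCRYPT_MODE, key);
    byte[] cipherText = cipher.doFinal("hello".getBytes(StandardCharsets.UTF_8));

    cipher.init(Cipher.DECRYPT_MODE, key);
    byte[] plain = cipher.doFinal(cipherText);
    System.out.println(new String(plain, StandardCharsets.UTF_8)); // prints: hello
  }
}
{code}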



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2015) Encrypt/decrypt key using symmetric key while writing/reading

2019-09-06 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2015:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to the trunk branch.

> Encrypt/decrypt key using symmetric key while writing/reading
> -
>
> Key: HDDS-2015
> URL: https://issues.apache.org/jira/browse/HDDS-2015
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> *Key Write Path (Encryption)*
> When a bucket metadata has gdprEnabled=true, we generate the GDPRSymmetricKey 
> and add it to Key Metadata before we create the Key.
> This ensures that key is encrypted before writing.
> *Key Read Path(Decryption)*
> While reading the Key, we check for gdprEnabled=true and then get the 
> GDPRSymmetricKey based on secret/algorithm as fetched from Key Metadata.
> Create a stream to decrypt the key and pass it on to client.
> *Test*
> Create Key in GDPR Enabled Bucket -> Read Key -> Verify content is as 
> expected -> Update Key Metadata to remove the gdprEnabled flag -> Read Key -> 
> Confirm the content is not as expected.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2015) Encrypt/decrypt key using symmetric key while writing/reading

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2015?focusedWorklogId=308085=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308085
 ]

ASF GitHub Bot logged work on HDDS-2015:


Author: ASF GitHub Bot
Created on: 06/Sep/19 18:43
Start Date: 06/Sep/19 18:43
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#issuecomment-528969668
 
 
   @ajayydv @bharatviswa504 Thanks for the comments. @dineshchitlangia Thanks
for the contribution. I have committed this patch to the trunk branch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308085)
Time Spent: 5h 40m  (was: 5.5h)

> Encrypt/decrypt key using symmetric key while writing/reading
> -
>
> Key: HDDS-2015
> URL: https://issues.apache.org/jira/browse/HDDS-2015
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> *Key Write Path (Encryption)*
> When a bucket's metadata has gdprEnabled=true, we generate the
> GDPRSymmetricKey and add it to the Key Metadata before we create the Key.
> This ensures that the key is encrypted before it is written.
> *Key Read Path (Decryption)*
> While reading the Key, we check for gdprEnabled=true and then fetch the
> GDPRSymmetricKey based on the secret/algorithm stored in the Key Metadata.
> We create a stream to decrypt the key and pass it on to the client.
> *Test*
> Create a Key in a GDPR-enabled Bucket -> Read the Key -> Verify the content
> is as expected -> Update the Key Metadata to remove the gdprEnabled flag ->
> Read the Key -> Confirm the content is no longer as expected (it is returned
> undecrypted).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2015) Encrypt/decrypt key using symmetric key while writing/reading

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2015?focusedWorklogId=308086&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308086
 ]

ASF GitHub Bot logged work on HDDS-2015:


Author: ASF GitHub Bot
Created on: 06/Sep/19 18:43
Start Date: 06/Sep/19 18:43
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308086)
Time Spent: 5h 50m  (was: 5h 40m)

> Encrypt/decrypt key using symmetric key while writing/reading
> -
>
> Key: HDDS-2015
> URL: https://issues.apache.org/jira/browse/HDDS-2015
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> *Key Write Path (Encryption)*
> When a bucket's metadata has gdprEnabled=true, we generate the
> GDPRSymmetricKey and add it to the Key Metadata before we create the Key.
> This ensures that the key is encrypted before it is written.
> *Key Read Path (Decryption)*
> While reading the Key, we check for gdprEnabled=true and then fetch the
> GDPRSymmetricKey based on the secret/algorithm stored in the Key Metadata.
> We create a stream to decrypt the key and pass it on to the client.
> *Test*
> Create a Key in a GDPR-enabled Bucket -> Read the Key -> Verify the content
> is as expected -> Update the Key Metadata to remove the gdprEnabled flag ->
> Read the Key -> Confirm the content is no longer as expected (it is returned
> undecrypted).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12831) HDFS throws FileNotFoundException on getFileBlockLocations(path-to-directory)

2019-09-06 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-12831:
-
   Attachment: HDFS-12831.001.patch
Affects Version/s: (was: 2.8.1)
   3.1.2
   Status: Patch Available  (was: Open)

> HDFS throws FileNotFoundException on getFileBlockLocations(path-to-directory)
> -
>
> Key: HDFS-12831
> URL: https://issues.apache.org/jira/browse/HDFS-12831
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.2
>Reporter: Steve Loughran
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-12831.001.patch
>
>
> The HDFS implementation of {{getFileBlockLocations(path, offset, len)}}
> throws an exception if the path references a directory.
> The base implementation (and all other filesystems) just returns an empty
> array, as implemented in {{getFileBlockLocations(filestatus, offset,
> len)}} and written up in filesystem.md as the correct behaviour.
> # This has been shown to break things: SPARK-14959.
> # There are no contract tests for these APIs; this shows up in HADOOP-15044.
> # Even if this is considered a wontfix, it should raise something like
> {{PathIsDirectoryException}} rather than FNFE.
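For reference, a minimal sketch of the behaviour filesystem.md prescribes (directories yield an empty array rather than an exception); this is illustrative, not the actual patch, and the single-host placeholder location is an assumption borrowed from how non-distributed filesystems typically answer:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;

/** Illustrative sketch of the filesystem.md contract. */
class BlockLocationsSketch {
  static BlockLocation[] getFileBlockLocations(FileStatus status,
      long offset, long len) throws IOException {
    if (offset < 0 || len < 0) {
      throw new IllegalArgumentException("Invalid start or len parameter");
    }
    // A directory (or a range past EOF) has no blocks: return an empty
    // array instead of throwing FileNotFoundException.
    if (status.isDirectory() || status.getLen() <= offset) {
      return new BlockLocation[0];
    }
    // Placeholder single location covering the whole file.
    String[] names = {"localhost:9866"};
    String[] hosts = {"localhost"};
    return new BlockLocation[] {
        new BlockLocation(names, hosts, 0, status.getLen())};
  }
}
{code}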



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14412) Enable Dynamometer to use the local build of Hadoop by default

2019-09-06 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924490#comment-16924490
 ] 

Erik Krogen commented on HDFS-14412:


[~pingsutw] you're right, it looks like it's currently set up for Hadoop 3.1.
We should definitely update it to be 3.2- and 3.3-compatible. I filed
HDFS-14829 for this.

> Enable Dynamometer to use the local build of Hadoop by default
> --
>
> Key: HDFS-14412
> URL: https://issues.apache.org/jira/browse/HDFS-14412
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: kevin su
>Priority: Major
>
> Currently, by default, Dynamometer will download a Hadoop tarball from the 
> internet to use as the Hadoop version-under-test. Since it is bundled inside 
> of Hadoop now, it would make more sense for it to use the current version of 
> Hadoop by default.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14829) [Dynamometer] Update TestDynamometerInfra to be Hadoop 3.2+ compatible

2019-09-06 Thread Erik Krogen (Jira)
Erik Krogen created HDFS-14829:
--

 Summary: [Dynamometer] Update TestDynamometerInfra to be Hadoop 
3.2+ compatible
 Key: HDFS-14829
 URL: https://issues.apache.org/jira/browse/HDFS-14829
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Erik Krogen


Currently the integration test included with Dynamometer, 
{{TestDynamometerInfra}}, is executing against version 3.1.2 of Hadoop. We 
should update it to run against a more recent version by default (3.2.x) and 
add support for 3.3 in anticipation of HDFS-14412.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-09-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924471#comment-16924471
 ] 

Hadoop QA commented on HDFS-14609:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 24m 
33s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14609 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979690/HDFS-14609.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 24791e3c31a5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d98c548 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27804/testReport/ |
| Max. process+thread count | 1610 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27804/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: 

[jira] [Moved] (HDDS-2097) Add TeraSort to acceptance test

2019-09-06 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao moved HDFS-14828 to HDDS-2097:
-

 Key: HDDS-2097  (was: HDFS-14828)
Workflow: patch-available, re-open possible  (was: no-reopen-closed, 
patch-avail)
 Project: Hadoop Distributed Data Store  (was: Hadoop HDFS)

> Add TeraSort to acceptance test
> ---
>
> Key: HDDS-2097
> URL: https://issues.apache.org/jira/browse/HDDS-2097
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Priority: Major
>
> We may begin with 1GB teragen/terasort/teravalidate.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes. {Static}

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924468#comment-16924468
 ] 

Íñigo Goiri commented on HDFS-14090:


{quote}
Both are theoretically misconfigurations, and hence I wanted to keep them under
the same umbrella of PermitAllocationException, which all implementations
should throw if allocation fails; this failure will happen due to
misconfiguration.
{quote}
Right, both are configuration issues.
Should we make it IllegalArgumentException then?

Regarding the FairCallQueue, should we add it to HDFS-14558?

> RBF: Improved isolation for downstream name nodes. {Static}
> ---
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, 
> HDFS-14090-HDFS-13891.002.patch, HDFS-14090-HDFS-13891.003.patch, 
> HDFS-14090-HDFS-13891.004.patch, HDFS-14090-HDFS-13891.005.patch, 
> HDFS-14090.006.patch, HDFS-14090.007.patch, HDFS-14090.008.patch, 
> HDFS-14090.009.patch, HDFS-14090.010.patch, HDFS-14090.011.patch, RBF_ 
> Isolation design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures should
> help minimize the impact on clients of connecting to healthy clusters vs
> unhealthy clusters.
> For example - if there are 2 name nodes downstream, and one of them is
> heavily loaded with calls spiking rpc queue times, then due to back pressure
> the same will start reflecting on the router. As a result, clients
> connecting to healthy/faster name nodes will also slow down, as the same rpc
> queue is maintained for all calls at the router layer. Essentially the same
> IPC thread pool is used by the router to connect to all name nodes.
> Currently the router uses one single rpc queue for all calls. Let's discuss
> how we can change the architecture and add some throttling logic for
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify
> the downstream name node, and maintain a separate queue for each underlying
> name node. Another, simpler way is to maintain some sort of rate limiter
> configured for each name node and let routers drop/reject/send error
> responses after a certain threshold.
> This won't be a simple change, as the router's 'Server' layer would need
> redesign and implementation. Currently this layer is the same as the name
> node's.
> Opening this ticket to discuss, design and implement this feature.
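As a rough illustration of the static-isolation idea, a per-nameservice permit pool could look like the sketch below; all names here are hypothetical, and this is not the patch's actual implementation:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

/** Hypothetical sketch: a fixed share of handler permits per nameservice. */
class StaticPermitLimiter {
  private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();

  StaticPermitLimiter(Map<String, Integer> permitsPerNameservice) {
    permitsPerNameservice.forEach(
        (ns, count) -> permits.put(ns, new Semaphore(count)));
  }

  /** Take a handler permit; reject or queue the call if none are left. */
  boolean acquire(String nameservice) {
    Semaphore s = permits.get(nameservice);
    if (s == null) {
      // An unknown nameservice is a misconfiguration.
      throw new IllegalArgumentException("No permits for " + nameservice);
    }
    return s.tryAcquire();
  }

  void release(String nameservice) {
    permits.get(nameservice).release();
  }
}
{code}

With such a pool, an overloaded nameservice exhausts only its own permits, so calls to healthy nameservices keep flowing.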



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14828) Add TeraSort to acceptance test

2019-09-06 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HDFS-14828:
-

 Summary: Add TeraSort to acceptance test
 Key: HDFS-14828
 URL: https://issues.apache.org/jira/browse/HDFS-14828
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Xiaoyu Yao


We may begin with 1GB teragen/terasort/teravalidate.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12831) HDFS throws FileNotFoundException on getFileBlockLocations(path-to-directory)

2019-09-06 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-12831:


Assignee: hemanthboyina  (was: Hanisha Koneru)

> HDFS throws FileNotFoundException on getFileBlockLocations(path-to-directory)
> -
>
> Key: HDFS-12831
> URL: https://issues.apache.org/jira/browse/HDFS-12831
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: hemanthboyina
>Priority: Major
>
> The HDFS implementation of {{getFileBlockLocations(path, offset, len)}}
> throws an exception if the path references a directory.
> The base implementation (and all other filesystems) just returns an empty
> array, as implemented in {{getFileBlockLocations(filestatus, offset,
> len)}} and written up in filesystem.md as the correct behaviour.
> # This has been shown to break things: SPARK-14959.
> # There are no contract tests for these APIs; this shows up in HADOOP-15044.
> # Even if this is considered a wontfix, it should raise something like
> {{PathIsDirectoryException}} rather than FNFE.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14817) [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct arguments are given.

2019-09-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924452#comment-16924452
 ] 

Hudson commented on HDFS-14817:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17243 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17243/])
HDFS-14817. [Dynamometer] Fix start script options parsing which (xkrogen: rev 
9637097ef9b213fcbeffa2538ccb7e0aaabde9c4)
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/Client.java


> [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct 
> arguments are given.
> ---
>
> Key: HDFS-14817
> URL: https://issues.apache.org/jira/browse/HDFS-14817
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Soya Miyoshi
>Assignee: Soya Miyoshi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14817.001.patch, HDFS-14817.002.patch, 
> HDFS-14817.003.patch
>
>
> When trying to launch the infrastructure application to begin the startup of
> the internal HDFS cluster, as shown in the Manual Workload Launch section
> [here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html],
>  {code:|borderStyle=solid}
> $ ./dynamometer-infra/bin/start-dynamometer-cluster.sh \
>  -hadoop_binary_path hadoop-3.0.2.tar.gz \
>  -conf_path my-hadoop-conf \
>  -fs_image_dir hdfs:///fsimage \
>  -block_list_path hdfs:///dyno/blocks
> {code}
> the usage message is always shown, even if correct arguments are given, when
> `-hadoop_binary_path` is placed as the first argument to the script.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14817) [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct arguments are given.

2019-09-06 Thread Erik Krogen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-14817:
---
Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct 
> arguments are given.
> ---
>
> Key: HDFS-14817
> URL: https://issues.apache.org/jira/browse/HDFS-14817
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Soya Miyoshi
>Assignee: Soya Miyoshi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14817.001.patch, HDFS-14817.002.patch, 
> HDFS-14817.003.patch
>
>
> When trying to launch the infrastructure application to begin the startup of
> the internal HDFS cluster, as shown in the Manual Workload Launch section
> [here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html],
>  {code:|borderStyle=solid}
> $ ./dynamometer-infra/bin/start-dynamometer-cluster.sh \
>  -hadoop_binary_path hadoop-3.0.2.tar.gz \
>  -conf_path my-hadoop-conf \
>  -fs_image_dir hdfs:///fsimage \
>  -block_list_path hdfs:///dyno/blocks
> {code}
> the usage message is always shown, even if correct arguments are given, when
> `-hadoop_binary_path` is placed as the first argument to the script.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14817) [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct arguments are given.

2019-09-06 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924448#comment-16924448
 ] 

Erik Krogen commented on HDFS-14817:


The v3 patch LGTM, thanks [~soyamiyoshi]! I just committed this to trunk.

> [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct 
> arguments are given.
> ---
>
> Key: HDFS-14817
> URL: https://issues.apache.org/jira/browse/HDFS-14817
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Soya Miyoshi
>Assignee: Soya Miyoshi
>Priority: Major
> Attachments: HDFS-14817.001.patch, HDFS-14817.002.patch, 
> HDFS-14817.003.patch
>
>
> When trying to launch the infrastructure application to begin the startup of
> the internal HDFS cluster, as shown in the Manual Workload Launch section
> [here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html],
>  {code:|borderStyle=solid}
> $ ./dynamometer-infra/bin/start-dynamometer-cluster.sh \
>  -hadoop_binary_path hadoop-3.0.2.tar.gz \
>  -conf_path my-hadoop-conf \
>  -fs_image_dir hdfs:///fsimage \
>  -block_list_path hdfs:///dyno/blocks
> {code}
> the usage message is always shown, even if correct arguments are given, when
> `-hadoop_binary_path` is placed as the first argument to the script.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14819) [Dynamometer] Cannot parse audit logs with ‘=‘ in unexpected places when starting a workload.

2019-09-06 Thread Erik Krogen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-14819:
---
Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

LGTM, thanks a lot [~soyamiyoshi]! I just committed this to trunk.

> [Dynamometer] Cannot parse audit logs with ‘=‘ in unexpected places when 
> starting a workload. 
> --
>
> Key: HDFS-14819
> URL: https://issues.apache.org/jira/browse/HDFS-14819
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Soya Miyoshi
>Assignee: Soya Miyoshi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14819.001.patch, HDFS-14819.002.patch, 
> HDFS-14819.003.patch
>
>
> When trying to launch a workload job, if any of the given audit logs' values
> contain `=` anywhere other than immediately after one of the log's keys
> (such as `ugi`, `src`), the audit log cannot be parsed and an exception is
> thrown.
> For example, the following audit log results in an exception, as it contains
> `=` in the `src` value (“/projects/date=0822”).
>  {code:|borderStyle=solid}
> 2019-08-22 01:00:00,186 INFO FSNamesystem.audit: allowed=true   ugi=feed
> (auth:a) ip=/119.472.323.333  cmd=getfileinfo
> src=/projects/date=0822 dst=null
> perm=null   proto=rpc
> {code}
> If the second `=` in `src=/projects/date=0822` is removed, it works fine.
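One way to make the parsing robust is to anchor on the known audit keys instead of splitting on every {{=}}, so values like {{src=/projects/date=0822}} survive; the sketch below is illustrative only, not the committed fix:

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Illustrative '='-tolerant parser for the audit keys shown above. */
class AuditLineSketch {
  // A value ends where the next "key=" begins (after whitespace) or at EOL.
  private static final Pattern KV = Pattern.compile(
      "(allowed|ugi|ip|cmd|src|dst|perm|proto)=(.*?)(?=\\s+\\w+=|$)");

  static Map<String, String> parse(String line) {
    Map<String, String> fields = new LinkedHashMap<>();
    Matcher m = KV.matcher(line);
    while (m.find()) {
      fields.put(m.group(1), m.group(2));
    }
    return fields; // src -> "/projects/date=0822", since '/' precedes '='
  }
}
{code}

A value containing whitespace followed by another {{word=}} token would still confuse this sketch, so the real fix may need a stricter grammar.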



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4819) Update Snapshot doc for HDFS-4758

2019-09-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-4819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924430#comment-16924430
 ] 

Hudson commented on HDFS-4819:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17240 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17240/])
HDFS-4819. [Dynamometer] Fix parsing of audit logs which contain = in (xkrogen: 
rev ae42c8cb61edcf69d0d6a9cf20ee9f936b0722fb)
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/test/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/TestAuditLogDirectParser.java
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/AuditLogDirectParser.java


> Update Snapshot doc for HDFS-4758
> -
>
> Key: HDFS-4819
> URL: https://issues.apache.org/jira/browse/HDFS-4819
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.1.0-beta
>
> Attachments: h4819_20130611.patch
>
>
> Update Snapshot doc to clarify that nested snapshots are not allowed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes. {Static}

2019-09-06 Thread CR Hota (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924427#comment-16924427
 ] 

CR Hota commented on HDFS-14090:


[~elgoiri] Thanks for the reviews. Some thoughts below.
{quote}My main issue is that PermitAllocationException is too generic.
 As you mention, it currently covers both (1) not enough handlers and (2) 
missconfigured nameservices.
 I think they should be two separate exceptions.
 The #1 case makes sense but the other one seems more like an 
IllegalArgumentException
{quote}
Both are theoretically misconfigurations, and hence I wanted to keep them under
the same umbrella of PermitAllocationException, which all implementations
should throw if allocation fails; this failure will happen due to
misconfiguration.
{quote} 
 BTW, should we also add the fairness per user to the Router RPC server?
 It would go to a separate JIRA though.
{quote}
Fairness at user level can still be enabled via FairCallQueue. We don't need to 
add anything separate from Router's perspective. With HADOOP-16268 already 
checked in, fairness along with balancing across routers is taken care of to a 
large extent.
  

> RBF: Improved isolation for downstream name nodes. {Static}
> ---
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, 
> HDFS-14090-HDFS-13891.002.patch, HDFS-14090-HDFS-13891.003.patch, 
> HDFS-14090-HDFS-13891.004.patch, HDFS-14090-HDFS-13891.005.patch, 
> HDFS-14090.006.patch, HDFS-14090.007.patch, HDFS-14090.008.patch, 
> HDFS-14090.009.patch, HDFS-14090.010.patch, HDFS-14090.011.patch, RBF_ 
> Isolation design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures, should 
> help minimize impact of clients connecting to healthy clusters vs unhealthy 
> clusters.
> For example - If there are 2 name nodes downstream, and one of them is 
> heavily loaded with calls spiking rpc queue times, due to back pressure the 
> same with start reflecting on the router. As a result of this, clients 
> connecting to healthy/faster name nodes will also slow down as same rpc queue 
> is maintained for all calls at the router layer. Essentially the same IPC 
> thread pool is used by router to connect to all name nodes.
> Currently router uses one single rpc queue for all calls. Lets discuss how we 
> can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from current call queue, immediately identify 
> downstream name node and maintain a separate queue for each underlying name 
> node. Another simpler way is to maintain some sort of rate limiter configured 
> for each name node and let routers drop/reject/send error requests after 
> certain threshold. 
> This won’t be a simple change as router’s ‘Server’ layer would need redesign 
> and implementation. Currently this layer is the same as name node.
> Opening this ticket to discuss, design and implement this feature.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14825) [Dynamometer] Workload doesn't start unless an absolute path of Mapper class given

2019-09-06 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924425#comment-16924425
 ] 

Erik Krogen commented on HDFS-14825:


Thanks for filing this [~soyamiyoshi]! I agree that the PR you mentioned should 
fix this.

> [Dynamometer] Workload doesn't start unless an absolute path of Mapper class 
> given
> --
>
> Key: HDFS-14825
> URL: https://issues.apache.org/jira/browse/HDFS-14825
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Soya Miyoshi
>Priority: Major
>
> When starting a workload with start-workload.sh, the workload doesn't start
> unless the fully-qualified name of the Mapper class is given.
>  
> {code:java}
> $ hadoop/tools/dynamometer/dynamometer-workload/bin/start-workload.sh \
> -Dauditreplay.input-path=hdfs:///user/souya/input/audit \
> -Dauditreplay.output-path=hdfs:///user/souya/results/ \
> -Dauditreplay.num-threads=50 -Dauditreplay.log-start-time.ms=5 \
> -nn_uri hdfs://namenode_address:port/ \
> -mapper_class_name AuditReplayMapper
> {code}
> results in
> {code:java}
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
> details.
> Exception in thread "main" java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.tools.dynamometer.workloadgenerator.AuditReplayMapper not 
> found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2572)
>   at 
> org.apache.hadoop.tools.dynamometer.workloadgenerator.WorkloadDriver.getMapperClass(WorkloadDriver.java:183)
>   at 
> org.apache.hadoop.tools.dynamometer.workloadgenerator.WorkloadDriver.run(WorkloadDriver.java:127)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.tools.dynamometer.workloadgenerator.WorkloadDriver.main(WorkloadDriver.java:172)
> {code}
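The linked PR presumably addresses this by falling back to a default package when a bare class name is given; a hedged sketch of that idea (the package constant below is an assumption for illustration):

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Illustrative resolver: try the name as given, then a default package. */
class MapperClassResolverSketch {
  // Assumed default package; the actual one may differ.
  private static final String DEFAULT_PACKAGE =
      "org.apache.hadoop.tools.dynamometer.workloadgenerator.audit";

  static Class<?> resolve(Configuration conf, String name)
      throws ClassNotFoundException {
    try {
      return conf.getClassByName(name);        // fully-qualified name given
    } catch (ClassNotFoundException e) {
      // Bare name such as "AuditReplayMapper": qualify it and retry.
      return conf.getClassByName(DEFAULT_PACKAGE + "." + name);
    }
  }
}
{code}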



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14754) Erasure Coding : The number of Under-Replicated Blocks never reduced

2019-09-06 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14754:
-
Attachment: HDFS-14754.005.patch

> Erasure Coding :  The number of Under-Replicated Blocks never reduced
> -
>
> Key: HDFS-14754
> URL: https://issues.apache.org/jira/browse/HDFS-14754
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Critical
> Attachments: HDFS-14754.001.patch, HDFS-14754.002.patch, 
> HDFS-14754.003.patch, HDFS-14754.004.patch, HDFS-14754.005.patch
>
>
> Using EC RS-3-2 with 6 DNs,
> we came across a scenario where, among the 5 EC blocks, the same block was
> replicated three times and two blocks went missing.
> The replicated block was not being deleted, and the missing blocks could not
> be reconstructed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes. {Static}

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924413#comment-16924413
 ] 

Íñigo Goiri commented on HDFS-14090:


BTW, should we also add the fairness per user to the Router RPC server?
It would go to a separate JIRA though.

> RBF: Improved isolation for downstream name nodes. {Static}
> ---
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, 
> HDFS-14090-HDFS-13891.002.patch, HDFS-14090-HDFS-13891.003.patch, 
> HDFS-14090-HDFS-13891.004.patch, HDFS-14090-HDFS-13891.005.patch, 
> HDFS-14090.006.patch, HDFS-14090.007.patch, HDFS-14090.008.patch, 
> HDFS-14090.009.patch, HDFS-14090.010.patch, HDFS-14090.011.patch, RBF_ 
> Isolation design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures should
> help minimize the impact on clients of connecting to healthy clusters vs
> unhealthy clusters.
> For example - if there are 2 name nodes downstream, and one of them is
> heavily loaded with calls spiking rpc queue times, then due to back pressure
> the same will start reflecting on the router. As a result, clients
> connecting to healthy/faster name nodes will also slow down, as the same rpc
> queue is maintained for all calls at the router layer. Essentially the same
> IPC thread pool is used by the router to connect to all name nodes.
> Currently the router uses one single rpc queue for all calls. Let's discuss
> how we can change the architecture and add some throttling logic for
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify
> the downstream name node, and maintain a separate queue for each underlying
> name node. Another, simpler way is to maintain some sort of rate limiter
> configured for each name node and let routers drop/reject/send error
> responses after a certain threshold.
> This won't be a simple change, as the router's 'Server' layer would need
> redesign and implementation. Currently this layer is the same as the name
> node's.
> Opening this ticket to discuss, design and implement this feature.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1982) Extend SCMNodeManager to support decommission and maintenance states

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1982?focusedWorklogId=308007&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308007
 ]

ASF GitHub Bot logged work on HDDS-1982:


Author: ASF GitHub Bot
Created on: 06/Sep/19 16:44
Start Date: 06/Sep/19 16:44
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#issuecomment-528927862
 
 
   Just a note: originally DatanodeInfo was based on the HDFS code, then I
think we copied it and created our own structure. At this point, I think
diverging should not be a big deal.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308007)
Time Spent: 4h 20m  (was: 4h 10m)

> Extend SCMNodeManager to support decommission and maintenance states
> 
>
> Key: HDDS-1982
> URL: https://issues.apache.org/jira/browse/HDDS-1982
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Currently, within SCM a node can have the following states:
> HEALTHY
> STALE
> DEAD
> DECOMMISSIONING
> DECOMMISSIONED
> The last 2 are not currently used.
> In order to support decommissioning and maintenance mode, we need to extend 
> the set of states a node can have to include decommission and maintenance 
> states.
> It is also important to note that a node decommissioning or entering 
> maintenance can also be HEALTHY, STALE or go DEAD.
> Therefore in this Jira I propose we should model a node state with two
> different sets of values. The first is effectively the liveness of the
> node, with the following states. This is largely what is in place now:
> HEALTHY
> STALE
> DEAD
> The second is the node operational state:
> IN_SERVICE
> DECOMMISSIONING
> DECOMMISSIONED
> ENTERING_MAINTENANCE
> IN_MAINTENANCE
> That means the overall total number of states for a node is the cross-product
> of the two lists above; however, it probably makes sense to keep the two
> states separate internally.
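A compact sketch of the proposed two-axis model, using the state names from the lists above (the wrapper class itself is illustrative):

{code:java}
/** Illustrative model: liveness and operational state kept separate. */
class NodeStatusSketch {
  enum NodeHealth { HEALTHY, STALE, DEAD }

  enum NodeOperationalState {
    IN_SERVICE, DECOMMISSIONING, DECOMMISSIONED,
    ENTERING_MAINTENANCE, IN_MAINTENANCE
  }

  /** A node carries one value from each axis. */
  static final class NodeStatus {
    final NodeHealth health;
    final NodeOperationalState opState;

    NodeStatus(NodeHealth health, NodeOperationalState opState) {
      this.health = health;
      this.opState = opState;
    }

    /** Example query over the cross-product: usable for new writes. */
    boolean isAvailableForWrites() {
      return health == NodeHealth.HEALTHY
          && opState == NodeOperationalState.IN_SERVICE;
    }
  }
}
{code}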



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1982) Extend SCMNodeManager to support decommission and maintenance states

2019-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1982?focusedWorklogId=308006&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-308006
 ]

ASF GitHub Bot logged work on HDDS-1982:


Author: ASF GitHub Bot
Created on: 06/Sep/19 16:43
Start Date: 06/Sep/19 16:43
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1344: HDDS-1982 
Extend SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r321819701
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/NodeStateMap.java
 ##
 @@ -43,7 +45,7 @@
   /**
* Represents the current state of node.
*/
-  private final ConcurrentHashMap> stateMap;
+  private final ConcurrentHashMap stateMap;
 
 Review comment:
   Even if you have 15x states, the number of nodes is small. If you have 100
nodes, there are only 1500 states, and if you have 1000 nodes, it is 15000
states. It is still trivial to keep these in memory. Here is the real kicker:
just like we decided not to write all cross products for the NodeState static
functions, we will end up needing lists for only the frequently accessed
patterns (in my mind that would be (in_service, healthy)). All other node
queries can be retrieved by iterating the lists as needed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 308006)
Time Spent: 4h 10m  (was: 4h)

> Extend SCMNodeManager to support decommission and maintenance states
> 
>
> Key: HDDS-1982
> URL: https://issues.apache.org/jira/browse/HDDS-1982
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Currently, within SCM a node can have the following states:
> HEALTHY
> STALE
> DEAD
> DECOMMISSIONING
> DECOMMISSIONED
> The last 2 are not currently used.
> In order to support decommissioning and maintenance mode, we need to extend 
> the set of states a node can have to include decommission and maintenance 
> states.
> It is also important to note that a node decommissioning or entering 
> maintenance can also be HEALTHY, STALE or go DEAD.
> Therefore in this Jira I propose we should model a node state with two
> different sets of values. The first is effectively the liveness of the
> node, with the following states. This is largely what is in place now:
> HEALTHY
> STALE
> DEAD
> The second is the node operational state:
> IN_SERVICE
> DECOMMISSIONING
> DECOMMISSIONED
> ENTERING_MAINTENANCE
> IN_MAINTENANCE
> That means the overall total number of states for a node is the cross-product
> of the two lists above; however, it probably makes sense to keep the two
> states separate internally.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14795) Add Throttler for writing block

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924393#comment-16924393
 ] 

Íñigo Goiri commented on HDFS-14795:


Thanks [~leosun08] for the patch.
Functionally, it looks good; I would improve readability.
Right now, one has to be very aware of how the BlockConstructionStage stages
progress, what the clientName means, etc.
I would extract the two ifs and make them functions with a javadoc explaining
why one is a transfer and why the other is a write:
{code}
if (isTransfer(stage, clientName)) {
  this.throttler = xserver.getTransferThrottler();
} else if (isWrite(stage)) {
  this.throttler = xserver.getWriteThrottler();
}
{code}
Actually, the whole code could be a function {{getThrottler()}}; see the
sketch below. As the snippet shows, I would also rename it to
{{transferThrottler}}.

Can we also add some tests?
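For instance, a sketch of that helper, assuming the {{isTransfer}}/{{isWrite}} predicates from the snippet above and the existing {{xserver}} getters:

{code:java}
/**
 * Sketch of the suggested refactor; the predicate and getter names
 * follow the snippet above and are otherwise assumptions.
 */
private DataTransferThrottler getThrottler(BlockConstructionStage stage,
    String clientName) {
  if (isTransfer(stage, clientName)) {
    // Datanode-to-datanode transfer (replication/recovery).
    return xserver.getTransferThrottler();
  }
  if (isWrite(stage)) {
    // Client-initiated pipeline write.
    return xserver.getWriteThrottler();
  }
  return null; // default: no throttling
}
{code}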


> Add Throttler for writing block
> ---
>
> Key: HDFS-14795
> URL: https://issues.apache.org/jira/browse/HDFS-14795
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14795.001.patch, HDFS-14795.002.patch
>
>
> DataXceiver#writeBlock
> {code:java}
> blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
> mirrorAddr, null, targets, false);
> {code}
> As the code above shows, DataXceiver#writeBlock doesn't throttle.
> I think it is necessary to throttle block writes, and to add a throttler
> for the PIPELINE_SETUP_APPEND_RECOVERY and
> PIPELINE_SETUP_STREAMING_RECOVERY stages.
> The default throttler value is still null.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-09-06 Thread Chen Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924386#comment-16924386
 ] 

Chen Zhang commented on HDFS-14609:
---

Submitted patch v4 to fix the checkstyle error.

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14609.001.patch, HDFS-14609.002.patch, 
> HDFS-14609.003.patch, HDFS-14609.004.patch
>
>
> We worked on router-based federation security as part of HDFS-13532. We kept
> it compatible with the way the namenode works. However, with HADOOP-16314 and
> HDFS-16354 in trunk, the auth filters seem to have been changed, causing
> tests to fail.
> Changes are needed accordingly in RBF, mainly fixing the broken tests.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2096) Ozone ACL document missing AddAcl API

2019-09-06 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HDDS-2096:


 Summary: Ozone ACL document missing AddAcl API
 Key: HDDS-2096
 URL: https://issues.apache.org/jira/browse/HDDS-2096
 Project: Hadoop Distributed Data Store
  Issue Type: Test
Reporter: Xiaoyu Yao


The current Ozone Native ACL APIs document looks like the following; the AddAcl API is missing.

 
h3. Ozone Native ACL APIs

The ACLs can be manipulated by a set of APIs supported by Ozone. The APIs 
supported are:
 # *SetAcl* – This API will take the user principal, the name and type of the
ozone object, and a list of ACLs.
 # *GetAcl* – This API will take the name and type of the ozone object and will
return a list of ACLs.
 # *RemoveAcl* – This API will take the name and type of the ozone object and
the ACL that has to be removed.
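For completeness, a sketch of the four operations side by side, including the missing AddAcl; these signatures are illustrative assumptions, not necessarily Ozone's exact interface:

{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.ozone.OzoneAcl;
import org.apache.hadoop.ozone.security.acl.OzoneObj;

/** Illustrative sketch of the native ACL operations. */
interface OzoneAclSketch {
  /** AddAcl: add one ACL to the object, keeping its existing ACLs. */
  boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException;

  /** SetAcl: replace the object's ACL list with the given one. */
  boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException;

  /** GetAcl: return the object's current ACL list. */
  List<OzoneAcl> getAcl(OzoneObj obj) throws IOException;

  /** RemoveAcl: remove one ACL from the object. */
  boolean removeAcl(OzoneObj obj, OzoneAcl acl) throws IOException;
}
{code}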



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-09-06 Thread Chen Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14609:
--
Attachment: (was: HDFS-14609.003.patch)

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14609.001.patch, HDFS-14609.002.patch, 
> HDFS-14609.003.patch, HDFS-14609.004.patch
>
>
> We worked on router-based federation security as part of HDFS-13532. We kept
> it compatible with the way the namenode works. However, with HADOOP-16314 and
> HDFS-16354 in trunk, the auth filters seem to have been changed, causing
> tests to fail.
> Changes are needed accordingly in RBF, mainly fixing the broken tests.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14827) RBF: Shared DN should display all info's in Router DataNode UI

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924385#comment-16924385
 ] 

Íñigo Goiri commented on HDFS-14827:


To be honest, the current approach of prepending the subcluster id is not the 
most intuitive.
I think we should create a new method that would give the DNs per subcluster 
and then aggregate from there.
Then a node that is in two subclusters could show up twice, and we would just
need to make it clear that it is the same node.

> RBF: Shared DN should display all info's in Router DataNode UI
> --
>
> Key: HDFS-14827
> URL: https://issues.apache.org/jira/browse/HDFS-14827
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-09-06 Thread Chen Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14609:
--
Attachment: HDFS-14609.004.patch

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14609.001.patch, HDFS-14609.002.patch, 
> HDFS-14609.003.patch, HDFS-14609.004.patch
>
>
> We worked on router-based federation security as part of HDFS-13532. We kept
> it compatible with the way the namenode works. However, with HADOOP-16314 and
> HDFS-16354 in trunk, the auth filters seem to have been changed, causing
> tests to fail.
> Changes are needed accordingly in RBF, mainly fixing the broken tests.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14754) Erasure Coding : The number of Under-Replicated Blocks never reduced

2019-09-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924330#comment-16924330
 ] 

Hadoop QA commented on HDFS-14754:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m  0s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 475 unchanged - 
0 fixed = 476 total (was 475) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 110 unchanged - 0 fixed = 111 total (was 110) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14754 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979665/HDFS-14754.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 38e047321941 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d98c548 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27801/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27801/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-09-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924298#comment-16924298
 ] 

Hadoop QA commented on HDFS-14609:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
41s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.2 Server=19.03.2 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14609 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979666/HDFS-14609.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cb997fd56e39 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d98c548 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27802/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27802/testReport/ |
| Max. process+thread count | 1599 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27802/console |
| Powered by | Apache Yetus 0.8.0 |

[jira] [Commented] (HDFS-14528) Failover from Active to Standby Failed

2019-09-06 Thread Ravuri Sushma sree (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924289#comment-16924289
 ] 

Ravuri Sushma sree commented on HDFS-14528:
---

Thanks [~csun],

No, adding the remote host twice wasn't intended. I will upload a patch 
correcting the same.

> Failover from Active to Standby Failed  
> 
>
> Key: HDFS-14528
> URL: https://issues.apache.org/jira/browse/HDFS-14528
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Reporter: Ravuri Sushma sree
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HDFS-14528.003.patch, HDFS-14528.2.Patch, 
> ZKFC_issue.patch
>
>
>  *In a cluster with more than one Standby NameNode, manual failover throws an 
> exception in some cases.*
> *When trying to execute the failover command from active to standby,* 
> *_./hdfs haadmin -failover nn1 nn2_*, the below exception is thrown:
>   Operation failed: Call From X-X-X-X/X-X-X-X to Y-Y-Y-Y: failed on 
> connection exception: java.net.ConnectException: Connection refused
> This is encountered in the following cases:
>  Scenario 1: 
> Namenodes - NN1 (Active), NN2 (Standby), NN3 (Standby)
> When trying to manually failover from NN1 to NN2 if NN3 is down, an exception 
> is thrown.
> Scenario 2:
>  Namenodes - NN1 (Active), NN2 (Standby), NN3 (Standby)
> ZKFCs -              ZKFC1,            ZKFC2,            ZKFC3
> When trying to manually failover from NN1 to NN3 if NN3's ZKFC (ZKFC3) is 
> down, an exception is thrown.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14528) Failover from Active to Standby Failed

2019-09-06 Thread Ravuri Sushma sree (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravuri Sushma sree updated HDFS-14528:
--
Description: 
 *In a cluster with more than one Standby NameNode, manual failover throws an 
exception in some cases.*

*When trying to execute the failover command from active to standby,* 
*_./hdfs haadmin -failover nn1 nn2_*, the below exception is thrown:

  Operation failed: Call From X-X-X-X/X-X-X-X to Y-Y-Y-Y: failed on 
connection exception: java.net.ConnectException: Connection refused

This is encountered in the following cases:

 Scenario 1: 

Namenodes - NN1 (Active), NN2 (Standby), NN3 (Standby)

When trying to manually failover from NN1 to NN2 if NN3 is down, an exception 
is thrown.

Scenario 2:

 Namenodes - NN1 (Active), NN2 (Standby), NN3 (Standby)

ZKFCs -              ZKFC1,            ZKFC2,            ZKFC3

When trying to manually failover from NN1 to NN3 if NN3's ZKFC (ZKFC3) is 
down, an exception is thrown.

  was:
 *In a cluster with more than one Standby NameNode, manual failover throws an 
exception in some cases.*

*When trying to execute the failover command from active to standby,* 
*_./hdfs haadmin -failover nn1 nn2_*, the below exception is thrown:

  Operation failed: Call From X-X-X-X/X-X-X-X to Y-Y-Y-Y: failed on 
connection exception: java.net.ConnectException: Connection refused

This is encountered in the following cases:

 Scenario 1: 

Namenodes - NN1 (Active), NN2 (Standby), NN3 (Standby)

When trying to manually failover from NN1 TO NN2 if NN3 is down, an exception 
is thrown.

Scenario 2:

 Namenodes - NN1 (Active), NN2 (Standby), NN3 (Standby)

ZKFCs -              ZKFC1,            ZKFC2,            ZKFC3

When trying to manually failover from NN1 to NN3 if NN3's ZKFC (ZKFC3) is 
down, an exception is thrown.


> Failover from Active to Standby Failed  
> 
>
> Key: HDFS-14528
> URL: https://issues.apache.org/jira/browse/HDFS-14528
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Reporter: Ravuri Sushma sree
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HDFS-14528.003.patch, HDFS-14528.2.Patch, 
> ZKFC_issue.patch
>
>
>  *In a cluster with more than one Standby NameNode, manual failover throws an 
> exception in some cases.*
> *When trying to execute the failover command from active to standby,* 
> *_./hdfs haadmin -failover nn1 nn2_*, the below exception is thrown:
>   Operation failed: Call From X-X-X-X/X-X-X-X to Y-Y-Y-Y: failed on 
> connection exception: java.net.ConnectException: Connection refused
> This is encountered in the following cases:
>  Scenario 1: 
> Namenodes - NN1 (Active), NN2 (Standby), NN3 (Standby)
> When trying to manually failover from NN1 to NN2 if NN3 is down, an exception 
> is thrown.
> Scenario 2:
>  Namenodes - NN1 (Active), NN2 (Standby), NN3 (Standby)
> ZKFCs -              ZKFC1,            ZKFC2,            ZKFC3
> When trying to manually failover from NN1 to NN3 if NN3's ZKFC (ZKFC3) is 
> down, an exception is thrown.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
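
To make the two scenarios above concrete, here is a minimal sketch of the kind 
of three-NameNode HA configuration they assume. The nameservice ID, host names, 
and ports below are hypothetical placeholders, not values taken from the issue:

{code:xml}
<!-- hdfs-site.xml (hypothetical values): one nameservice, three NameNodes -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <!-- NN1 (active), NN2 and NN3 (standbys), matching the scenarios above -->
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2,nn3</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>host1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>host2.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn3</name>
  <value>host3.example.com:8020</value>
</property>
{code}

With such a layout, Scenario 1 amounts to stopping nn3 and then running 
{{hdfs haadmin -failover nn1 nn2}}: although neither nn1 nor nn2 is down, the 
command fails with the "Connection refused" error quoted in the description.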


