[jira] [Created] (HDFS-13543) when datanode have some unmounted disks, disk balancer should skip these disks not throw IllegalArgumentException

2018-05-09 Thread luoge123 (JIRA)
luoge123 created HDFS-13543:
---

 Summary: when datanode have some unmounted disks, disk balancer 
should skip these disks not throw IllegalArgumentException
 Key: HDFS-13543
 URL: https://issues.apache.org/jira/browse/HDFS-13543
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: diskbalancer
Affects Versions: 3.0.0
Reporter: luoge123


When a datanode has an unmounted disk, the disk capacity the disk balancer 
reads from the storage report is zero, which causes 
getVolumeInfoFromStorageReports to throw an IllegalArgumentException:
{code:java}
java.lang.IllegalArgumentException
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
at 
org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume.setUsed(DiskBalancerVolume.java:268)
at 
org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getVolumeInfoFromStorageReports(DBNameNodeConnector.java:148)
at 
org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getNodes(DBNameNodeConnector.java:90)
at 
org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerCluster.readClusterInfo(DiskBalancerCluster.java:133)
at 
org.apache.hadoop.hdfs.server.diskbalancer.command.Command.readClusterInfo(Command.java:123)
at 
org.apache.hadoop.hdfs.server.diskbalancer.command.ReportCommand.execute(ReportCommand.java:74)
at 
org.apache.hadoop.hdfs.tools.DiskBalancerCLI.dispatch(DiskBalancerCLI.java:468)
at org.apache.hadoop.hdfs.tools.DiskBalancerCLI.run(DiskBalancerCLI.java:183)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.hdfs.tools.DiskBalancerCLI.main(DiskBalancerCLI.java:164)
{code}
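The fix the summary asks for can be sketched as a filtering step before the capacity check. This is an illustrative sketch only: the `VolumeInfo` class and `usableVolumes` method below are stand-ins, not the real `DiskBalancerVolume` API, and the real fix would live in the connector code shown in the stack trace.

```java
import java.util.ArrayList;
import java.util.List;

public class SkipUnmountedVolumes {
    /** Stand-in for a volume entry from a datanode storage report. */
    public static final class VolumeInfo {
        public final String path;
        public final long capacityBytes; // 0 when the disk is unmounted
        public VolumeInfo(String path, long capacityBytes) {
            this.path = path;
            this.capacityBytes = capacityBytes;
        }
    }

    /**
     * Returns only volumes with a positive capacity. Zero-capacity
     * (unmounted) volumes are skipped instead of being passed on to a
     * Preconditions-style check that would throw IllegalArgumentException.
     */
    public static List<VolumeInfo> usableVolumes(List<VolumeInfo> reported) {
        List<VolumeInfo> usable = new ArrayList<>();
        for (VolumeInfo v : reported) {
            if (v.capacityBytes > 0) {
                usable.add(v);
            }
            // else: skip silently, as the report requests
        }
        return usable;
    }

    public static void main(String[] args) {
        List<VolumeInfo> reported = new ArrayList<>();
        reported.add(new VolumeInfo("/data/1", 4_000_000_000L));
        reported.add(new VolumeInfo("/data/2", 0L)); // unmounted disk
        // Only /data/1 survives the filter.
        System.out.println(usableVolumes(reported).size());
    }
}
```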



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-05-09 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469901#comment-16469901
 ] 

Bibin A Chundatt edited comment on HDFS-13388 at 5/10/18 6:26 AM:
--

 

[~elgoiri]

branch-3 compilation is broken. Backporting HDFS-12813 should fix the compilation. 
{code:java}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-hdfs-client: Compilation failure: Compilation failure: 
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[79,11]
 cannot find symbol
[ERROR]   symbol:   variable currentUsedProxy
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[81,41]
 cannot find symbol
[ERROR]   symbol:   variable currentUsedProxy
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[83,15]
 cannot find symbol
[ERROR]   symbol:   variable currentUsedProxy
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[85,18]
 cannot find symbol
[ERROR]   symbol:   class InvocationTargetException
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[87,49]
 cannot find symbol
[ERROR]   symbol:   variable currentUsedProxy
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[89,15]
 cannot find symbol
[ERROR]   symbol:   variable currentUsedProxy
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] -> [Help 1]

{code}


was (Author: bibinchundatt):
[~yzhangal]

branch-3 compilation is broken 

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-hdfs-client: Compilation failure: Compilation failure: 
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[79,11]
 cannot find symbol
[ERROR]   symbol:   variable currentUsedProxy
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[81,41]
 cannot find symbol
[ERROR]   symbol:   variable currentUsedProxy
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[83,15]
 cannot find symbol
[ERROR]   symbol:   variable currentUsedProxy
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[85,18]
 cannot find symbol
[ERROR]   symbol:   class InvocationTargetException
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[87,49]
 cannot find symbol
[ERROR]   symbol:   variable cu

[jira] [Commented] (HDDS-18) Ozone Shell should use RestClient and RpcClient

2018-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469971#comment-16469971
 ] 

Hudson commented on HDDS-18:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14156 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14156/])
HDDS-18. Ozone Shell should use RestClient and RpcClient. Contributed by 
(aengineer: rev 46e0f2786259ecd25e9a5a1e51b667f9c32e5c56)
* (edit) 
hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/ListKeyHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/ListVolumeHandler.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/ListBucketHandler.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/web/client/OzoneRestClient.java
* (add) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientException.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Handler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/DeleteBucketHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/DeleteVolumeHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestVolume.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneVolume.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/CreateVolumeHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/UpdateVolumeHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
* (edit) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
* (delete) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/web/client/OzoneRestClientException.java
* (edit) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java
* (edit) hadoop-ozone/acceptance-test/src/test/compose/docker-compose.yaml
* (add) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientUtils.java
* (edit) 
hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone.robot
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/web/client/OzoneVolume.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/PutKeyHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/InfoBucketHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestBuckets.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/UpdateBucketHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/DeleteKeyHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/InfoVolumeHandler.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/web/client/OzoneBucket.java
* (edit) 
hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/CreateBucketHandler.java
* (edit) 
hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzFs.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/InfoKeyHandler.java
* (edit) 
hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/Constants.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetKeyHandler.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java


> Ozone Shell should use RestClient and RpcClient
> ---
>
> Key: HDDS-18
> URL: https://issues.apache.org/jira/browse/HDDS-18
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachm

[jira] [Updated] (HDDS-18) Ozone Shell should use RestClient and RpcClient

2018-05-09 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-18:
-
   Resolution: Fixed
Fix Version/s: 0.2.1
   Status: Resolved  (was: Patch Available)

[~nandakumar131] Thanks for the review, [~ljain] Thanks for the contribution.  
I have committed this feature to the trunk.

> Ozone Shell should use RestClient and RpcClient
> ---
>
> Key: HDDS-18
> URL: https://issues.apache.org/jira/browse/HDDS-18
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-18.001.patch, HDDS-18.002.patch, HDDS-18.003.patch, 
> HDFS-13431-HDFS-7240.001.patch, HDFS-13431-HDFS-7240.002.patch, 
> HDFS-13431-HDFS-7240.003.patch, HDFS-13431.001.patch, HDFS-13431.002.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.






[jira] [Commented] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-09 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469947#comment-16469947
 ] 

Xiao Chen commented on HDFS-13539:
--

Failed tests look unrelated and passed locally. Will commit by end of Thursday 
if no further comments.

> DFSInputStream NPE when reportCheckSumFailure
> -
>
> Key: HDFS-13539
> URL: https://issues.apache.org/jira/browse/HDFS-13539
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13539.01.patch, HDFS-13539.02.patch
>
>
> We have seen the following exception with DFSStripedInputStream.
> {noformat}
> readDirect: FSDataInputStream#read error:
> NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
> {noformat}
> Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the 
> only object there that can be null. ({{currentLocatedBlock.getLocations()}} 
> cannot be null, because the {{LocatedBlock}} constructor checks {{locs}} and 
> assigns {{EMPTY_LOCS}} when it is null.)
> The NPE masks the original exception.
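The guard the report implies can be sketched as follows. Names here (`handleChecksumFailure`, `LocatedBlockStub`) are illustrative stand-ins, not the actual DFSStripedInputStream code: the point is that the block reference is null-checked before the checksum failure is reported, so the original error is no longer masked by an NPE.

```java
public class ChecksumReportGuard {
    /** Stand-in mirroring LocatedBlock: null locations become empty. */
    public static final class LocatedBlockStub {
        public final String[] locations;
        public LocatedBlockStub(String[] locs) {
            this.locations = (locs == null) ? new String[0] : locs;
        }
    }

    /**
     * Reports a checksum failure without dereferencing a possibly-null
     * block. Without the null check, current.locations would throw an NPE
     * that hides the checksum error we meant to surface.
     */
    public static String handleChecksumFailure(LocatedBlockStub current) {
        if (current == null) {
            return "checksum failure (block unknown)";
        }
        return "checksum failure at " + current.locations.length
            + " location(s)";
    }

    public static void main(String[] args) {
        System.out.println(handleChecksumFailure(null));
        System.out.println(handleChecksumFailure(new LocatedBlockStub(null)));
    }
}
```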






[jira] [Commented] (HDDS-5) Enable OzoneManager kerberos auth

2018-05-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469908#comment-16469908
 ] 

genericqa commented on HDDS-5:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
45s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
47s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
18s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
57s{color} | {color:red} hadoop-hdds/common in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-ozone/ozone-manager in HDDS-4 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 16s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License w

[jira] [Updated] (HDDS-3) When datanodes register, send NodeReport and ContainerReport

2018-05-09 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-3?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-3:

Summary: When datanodes register, send NodeReport and ContainerReport  
(was: Ozone: When datanodes register, send NodeReport and ContainerReport)

> When datanodes register, send NodeReport and ContainerReport
> 
>
> Key: HDDS-3
> URL: https://issues.apache.org/jira/browse/HDDS-3
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode, SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDFS-13432-HDFS-7240.00.patch, HDFS-13432.01.patch
>
>
> From the chill mode design notes:
> As part of this Jira, the register call will be updated to send NodeReport 
> and ContainerReport.
> Currently, datanodes send one heartbeat every 30 seconds. That means that 
> even if a datanode is ready, it can take around a minute or longer before 
> the SCM sees its container reports. We can address this partially by making 
> sure the register call contains both NodeReport and ContainerReport.
>  
>  
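The idea in the description above can be sketched as a register request that bundles both reports. The types below (`RegisterRequest`, `NodeReport`, `ContainerReport`) are stand-ins for the real HDDS protobuf messages, not the committed API: the point is that SCM can act on a datanode's state immediately at registration, instead of waiting for the first 30-second heartbeat.

```java
public class RegisterWithReports {
    /** Stand-in for the node-level storage report. */
    public static final class NodeReport {
        public final long capacityBytes;
        public final long usedBytes;
        public NodeReport(long capacityBytes, long usedBytes) {
            this.capacityBytes = capacityBytes;
            this.usedBytes = usedBytes;
        }
    }

    /** Stand-in for the container report. */
    public static final class ContainerReport {
        public final int containerCount;
        public ContainerReport(int containerCount) {
            this.containerCount = containerCount;
        }
    }

    /** Register call payload carrying both reports up front. */
    public static final class RegisterRequest {
        public final String datanodeId;
        public final NodeReport nodeReport;
        public final ContainerReport containerReport;
        public RegisterRequest(String datanodeId, NodeReport nodeReport,
                               ContainerReport containerReport) {
            this.datanodeId = datanodeId;
            this.nodeReport = nodeReport;
            this.containerReport = containerReport;
        }
    }

    public static RegisterRequest buildRegister(String datanodeId,
            NodeReport nr, ContainerReport cr) {
        return new RegisterRequest(datanodeId, nr, cr);
    }

    public static void main(String[] args) {
        // SCM sees node and container state at registration time,
        // not one heartbeat interval later.
        RegisterRequest req = buildRegister("dn-1",
            new NodeReport(1_000_000L, 0L), new ContainerReport(0));
        System.out.println(req.datanodeId + " containers="
            + req.containerReport.containerCount);
    }
}
```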






[jira] [Updated] (HDDS-19) Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-09 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-19:
-
Summary: Update ozone to latest ratis snapshot build 
(0.1.1-alpha-4309324-SNAPSHOT)  (was: Ozone: Update ozone to latest ratis 
snapshot build (0.1.1-alpha-4309324-SNAPSHOT))

> Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)
> --
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>







[jira] [Updated] (HDDS-16) Remove Pipeline from Datanode Container Protocol protobuf definition.

2018-05-09 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-16:
-
Summary: Remove Pipeline from Datanode Container Protocol protobuf 
definition.  (was: Ozone: Remove Pipeline from Datanode Container Protocol 
protobuf definition.)

> Remove Pipeline from Datanode Container Protocol protobuf definition.
> -
>
> Key: HDDS-16
> URL: https://issues.apache.org/jira/browse/HDDS-16
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Native, Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-16.001.patch
>
>
> The current Ozone code passes pipeline information to datanodes as well. 
> However datanodes do not use this information.
> Hence Pipeline should be removed from ozone datanode commands.






[jira] [Updated] (HDDS-18) Ozone Shell should use RestClient and RpcClient

2018-05-09 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-18:
-
Summary: Ozone Shell should use RestClient and RpcClient  (was: Ozone: 
Ozone Shell should use RestClient and RpcClient)

> Ozone Shell should use RestClient and RpcClient
> ---
>
> Key: HDDS-18
> URL: https://issues.apache.org/jira/browse/HDDS-18
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-18.001.patch, HDDS-18.002.patch, HDDS-18.003.patch, 
> HDFS-13431-HDFS-7240.001.patch, HDFS-13431-HDFS-7240.002.patch, 
> HDFS-13431-HDFS-7240.003.patch, HDFS-13431.001.patch, HDFS-13431.002.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.






[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-05-09 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469901#comment-16469901
 ] 

Bibin A Chundatt commented on HDFS-13388:
-

[~yzhangal]

branch-3 compilation is broken 

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-hdfs-client: Compilation failure: Compilation failure: 
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[79,11]
 cannot find symbol
[ERROR]   symbol:   variable currentUsedProxy
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[81,41]
 cannot find symbol
[ERROR]   symbol:   variable currentUsedProxy
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[83,15]
 cannot find symbol
[ERROR]   symbol:   variable currentUsedProxy
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[85,18]
 cannot find symbol
[ERROR]   symbol:   class InvocationTargetException
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[87,49]
 cannot find symbol
[ERROR]   symbol:   variable currentUsedProxy
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] 
/opt/apacheprojects/hadoop/FORCOMMIT/branch3/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java:[89,15]
 cannot find symbol
[ERROR]   symbol:   variable currentUsedProxy
[ERROR]   location: class 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider.RequestHedgingInvocationHandler
[ERROR] -> [Help 1]

{code}

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, 
> HADOOP-13388.0009.patch, HADOOP-13388.0010.patch, HADOOP-13388.0011.patch, 
> HADOOP-13388.0012.patch, HADOOP-13388.0013.patch, HADOOP-13388.0014.patch
>
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even after a successful NN is already known.
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler, which handles each invoked method by 
> calling multiple configured NNs.
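The intended behavior described above can be sketched as "cache the winning proxy". This is a simplified illustration, not the actual RequestHedgingProxyProvider code: the real implementation fires the proxies concurrently and returns the first success, while this sketch tries them sequentially; the proxies are stand-in functions rather than NameNode RPC proxies.

```java
import java.util.List;
import java.util.function.Function;

public class HedgingSketch {
    private final List<Function<String, String>> proxies; // stand-in NN proxies
    private Function<String, String> currentUsedProxy;    // null => must hedge

    public HedgingSketch(List<Function<String, String>> proxies) {
        this.proxies = proxies;
    }

    public String invoke(String request) {
        if (currentUsedProxy != null) {
            try {
                // Fast path: call only the previously successful NN.
                return currentUsedProxy.apply(request);
            } catch (RuntimeException e) {
                currentUsedProxy = null; // failover: hedge again below
            }
        }
        RuntimeException last = null;
        for (Function<String, String> p : proxies) { // "hedge" across all NNs
            try {
                String result = p.apply(request);
                currentUsedProxy = p; // remember the winner for next time
                return result;
            } catch (RuntimeException e) {
                last = e;
            }
        }
        throw last != null ? last : new IllegalStateException("no proxies");
    }

    public static void main(String[] args) {
        Function<String, String> standby = s -> {
            throw new RuntimeException("standby");
        };
        Function<String, String> active = s -> "ok:" + s;
        HedgingSketch h = new HedgingSketch(List.of(standby, active));
        System.out.println(h.invoke("getFileInfo"));
        // The second call goes straight to the cached active proxy.
        System.out.println(h.invoke("listStatus"));
    }
}
```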






[jira] [Commented] (HDDS-18) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-09 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469899#comment-16469899
 ] 

Anu Engineer commented on HDDS-18:
--

[~ljain] Thanks for taking care of this. [~nandakumar131] Thanks for review. I 
will commit this patch now.

 

> Ozone: Ozone Shell should use RestClient and RpcClient
> --
>
> Key: HDDS-18
> URL: https://issues.apache.org/jira/browse/HDDS-18
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-18.001.patch, HDDS-18.002.patch, HDDS-18.003.patch, 
> HDFS-13431-HDFS-7240.001.patch, HDFS-13431-HDFS-7240.002.patch, 
> HDFS-13431-HDFS-7240.003.patch, HDFS-13431.001.patch, HDFS-13431.002.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.






[jira] [Updated] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-09 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-19:
-
Status: Open  (was: Patch Available)

Cancelling this patch since we will need a new RC from ra

> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>







[jira] [Comment Edited] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-09 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469896#comment-16469896
 ] 

Anu Engineer edited comment on HDDS-19 at 5/10/18 4:19 AM:
---

Cancelling this patch since we will need a new RC from ratis


was (Author: anu):
Cancelling this patch since we will need a new RC from ra

> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>







[jira] [Commented] (HDDS-38) Add SCMNodeStorage map in SCM class to store storage statistics per Datanode

2018-05-09 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-38?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469890#comment-16469890
 ] 

Anu Engineer commented on HDDS-38:
--

[~shashikant] Thanks for taking care of this. Some high level thoughts.
 # We have to take care of two issues.
 ## Running out of space on a Node.
 ## Running out of space on Volume.
 # I am guessing that this patch takes care of the node issue. It is good and 
we can commit.
 # But before we do that, I wanted to see how we are going to take care of the 
Volume issue.
 ## To do that, the datanode has to send us a list of volumes with their capacity and used space.
 ## Should we aggregate that for each node and infer this info, or do we still 
want the datanode to send both these reports separately?
 ## I am OK with the NodeReport and aggregation over SCM stats; just a question 
in my mind.

Otherwise the patch looks good to me. One minor nit:

Could we please rename +private SCMNodeStat scmStat+ to something like 
clusterStat?
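
The aggregation alternative in point 3.2 can be sketched as follows. All class and field names here are invented for illustration; they are not the actual HDDS types:

{code:java}
import java.util.Arrays;
import java.util.List;

// Invented stand-in for a per-volume storage report from a datanode.
class VolumeReport {
  final long capacity;
  final long used;
  VolumeReport(long capacity, long used) {
    this.capacity = capacity;
    this.used = used;
  }
}

// Invented stand-in for a node-level stat kept in SCM.
class NodeStat {
  long capacity;
  long used;

  // SCM sums the per-volume reports, so the datanode only needs to send
  // one (per-volume) report and the node-level view is inferred from it.
  static NodeStat aggregate(List<VolumeReport> volumes) {
    NodeStat stat = new NodeStat();
    for (VolumeReport v : volumes) {
      stat.capacity += v.capacity;
      stat.used += v.used;
    }
    return stat;
  }
}
{code}

With this shape the datanode sends only volume-level data, and SCM can answer both the node-capacity and volume-capacity questions.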

> Add SCMNodeStorage map in SCM class to store storage statistics per Datanode
> 
>
> Key: HDDS-38
> URL: https://issues.apache.org/jira/browse/HDDS-38
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-38.00.patch
>
>
> Currently, the storage stats per Datanode are maintained inside 
> scmNodeManager. This will
> move the scmNodeStats for storage outside SCMNodeManager to simplify 
> refactoring.






[jira] [Commented] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469867#comment-16469867
 ] 

genericqa commented on HDFS-13542:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 53 unchanged - 0 fixed = 54 total (was 53) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 31s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestPread |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13542 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922764/HDFS-13542.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 70573ca17bda 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 914b98a7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24170/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24170/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt

[jira] [Commented] (HDFS-13346) RBF: Fix synchronization of router quota and ns quota

2018-05-09 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469854#comment-16469854
 ] 

Yiqun Lin commented on HDFS-13346:
--

[~elgoiri], would you mind having a look at this?
[~liuhongtong], does this patch address your case now?

> RBF: Fix synchronization of router quota and ns quota
> -
>
> Key: HDFS-13346
> URL: https://issues.apache.org/jira/browse/HDFS-13346
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: liuhongtong
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: incompatible
> Attachments: HDFS-13346.001.patch, HDFS-13346.002.patch, 
> HDFS-13346.003.patch, HDFS-13346.004.patch, HDFS-13346.005.patch
>
>
> Check Router Quota and ns Quota:
> {code}
> $ hdfs dfsrouteradmin -ls /ns10t
> Mount Table Entries:
> SourceDestinations  Owner 
> Group Mode  Quota/Usage  
> /ns10tns10->/ns10t  hadp  
> hadp  rwxr-xr-x [NsQuota: 150/319, 
> SsQuota: -/-]
> /ns10t/ns1mountpoint  ns1->/a/tthadp  
> hadp  rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> $ hdfs dfs -count -q hdfs://ns10/ns10t
>  150-155none inf3 
>  302  0 hdfs://ns10/ns10t
> {code}
> Update Router Quota:
> {code:java}
> $ hdfs dfsrouteradmin -setQuota /ns10t -nsQuota 400
> Successfully set quota for mount point /ns10t
> {code}
> Check Router Quota and ns Quota:
> {code:java}
> $ hdfs dfsrouteradmin -ls /ns10t
> Mount Table Entries:
> SourceDestinations  Owner 
> Group Mode  Quota/Usage  
> /ns10tns10->/ns10t  hadp  
> hadp  rwxr-xr-x [NsQuota: 400/319, 
> SsQuota: -/-]
> /ns10t/ns1mountpoint  ns1->/a/tthadp  
> hadp  rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> $ hdfs dfs -count -q hdfs://ns10/ns10t
>  150-155none inf3 
>  302  0 hdfs://ns10/ns10t
> {code}
> Now the Router quota has been updated successfully, but the ns quota has not.
>  
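
The synchronization gap reported above can be sketched as the following invariant; all names here are hypothetical and invented for illustration, not the actual Router code:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the desired behavior: updating the mount-table
// quota should also propagate the new quota to the backing namespace.
class QuotaSync {
  final Map<String, Long> mountTableQuota = new HashMap<>();
  final Map<String, Long> namespaceQuota = new HashMap<>();

  void setQuota(String mountPoint, long nsQuota) {
    mountTableQuota.put(mountPoint, nsQuota);
    // The step missing in the report: push the quota downstream as well,
    // so "dfsrouteradmin -ls" and "dfs -count -q" agree.
    namespaceQuota.put(mountPoint, nsQuota);
  }
}
{code}

In the reproduction above, only the first map is updated (400), while the namespace still reports the old quota (150).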






[jira] [Updated] (HDDS-5) Enable OzoneManager kerberos auth

2018-05-09 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-5:
--
Attachment: (was: HDDS-5-HDDS-4.01.patch)

> Enable OzoneManager kerberos auth
> -
>
> Key: HDDS-5
> URL: https://issues.apache.org/jira/browse/HDDS-5
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-5-HDDS-4.00.patch, HDDS-5-HDDS-4.01.patch, 
> initial-patch.patch
>
>
> enable KSM kerberos auth






[jira] [Updated] (HDDS-5) Enable OzoneManager kerberos auth

2018-05-09 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-5:
--
Attachment: HDDS-5-HDDS-4.01.patch

> Enable OzoneManager kerberos auth
> -
>
> Key: HDDS-5
> URL: https://issues.apache.org/jira/browse/HDDS-5
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-5-HDDS-4.00.patch, HDDS-5-HDDS-4.01.patch, 
> initial-patch.patch
>
>
> enable KSM kerberos auth









[jira] [Commented] (HDDS-5) Enable OzoneManager kerberos auth

2018-05-09 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469813#comment-16469813
 ] 

Ajay Kumar commented on HDDS-5:
---

Rebased with branch and added test case.

> Enable OzoneManager kerberos auth
> -
>
> Key: HDDS-5
> URL: https://issues.apache.org/jira/browse/HDDS-5
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-5-HDDS-4.00.patch, HDDS-5-HDDS-4.01.patch, 
> initial-patch.patch
>
>
> enable KSM kerberos auth






[jira] [Commented] (HDDS-10) docker changes to test secure ozone cluster

2018-05-09 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469777#comment-16469777
 ] 

Xiaoyu Yao commented on HDDS-10:


Thanks [~ajayydv] for working on this. The patch looks good to me overall. Just 
a few minor issues.

docker-compose
 * Lines 23-24, etc.: Can we remove the example network-related settings from 
docker-compose, since they are the same as the defaults?
 * Line 19: can we make this name generic, like kerberos.kdc?

docker-config
 # Line 48: we should expose the datanode http port 1012 via the docker-compose 
file as well.
 # Line 49: the port does not match the namenode port in the docker-compose file.

> docker changes to test secure ozone cluster
> ---
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-10-HDDS-4.00.patch
>
>
> Update docker compose and settings to test secure ozone cluster.






[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-05-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469773#comment-16469773
 ] 

genericqa commented on HDFS-13448:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 50s{color} | {color:orange} root: The patch generated 3 new + 27 unchanged - 
0 fixed = 30 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 31s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 38s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
45s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}216m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestTrash |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13448 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922711/HDFS-13448.6.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c4efdf27439f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build too

[jira] [Updated] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-09 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13542:
-
Target Version/s: 3.2.0, 2.9.2  (was: 2.9.2)

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: windows
> Attachments: HDFS-13542-branch-2.000.patch, HDFS-13542.000.patch
>
>
> branch-2.9 has failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwxjava.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}






[jira] [Commented] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-09 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469752#comment-16469752
 ] 

Anbang Hu commented on HDFS-13542:
--

[^HDFS-13542.000.patch] is for trunk.

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: windows
> Attachments: HDFS-13542-branch-2.000.patch, HDFS-13542.000.patch
>
>
> branch-2.9 has failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwxjava.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}






[jira] [Updated] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-09 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13542:
-
Attachment: HDFS-13542.000.patch

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: windows
> Attachments: HDFS-13542-branch-2.000.patch, HDFS-13542.000.patch
>
>
> branch-2.9 has failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwxjava.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}






[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469745#comment-16469745
 ] 

Hudson commented on HDFS-13537:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14153 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14153/])
HDFS-13537. TestHdfsHelper does not generate jceks path properly for (inigoiri: 
rev 914b98a713f70667d4380ac752f6c4de931520d9)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java


> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13537-branch-2.000.patch, HDFS-13537.000.patch, 
> HDFS-13537.001.patch, image-2018-05-09-16-29-50-976.png, 
> image-2018-05-09-16-31-40-981.png
>
>
> In TestHdfsHelper#startMiniHdfs, jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> While the path from getTestRootDir() is a relative path (on Windows), the 
> result will be incorrect because there is no "/" between "://file" and the 
> relative path.
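
The failure mode can be sketched as a plain string-concatenation issue (a minimal illustration with made-up paths, not the actual test code): joining the scheme prefix to a relative URI drops the separator.

{code:java}
public class JceksPathSketch {
  public static void main(String[] args) {
    String prefix = "jceks://file";                 // SCHEME_NAME + "://file"
    String relative = "target/test/test.jks";       // made-up relative test root
    String absolute = "/work/target/test/test.jks"; // made-up absolute path

    // Relative path: no "/" ends up between "file" and the path -> malformed URI.
    System.out.println(prefix + relative);   // jceks://filetarget/test/test.jks
    // Absolute path: the leading "/" keeps the URI well-formed.
    System.out.println(prefix + absolute);   // jceks://file/work/target/test/test.jks
  }
}
{code}

On Windows the test root resolves to a relative URI, so the first (broken) case is what the test produces.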






[jira] [Commented] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469739#comment-16469739
 ] 

Íñigo Goiri commented on HDFS-13542:


Does this apply to trunk?

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: windows
> Attachments: HDFS-13542-branch-2.000.patch
>
>
> branch-2.9 shows this failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwx
> java.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}
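The usual remedy for this class of Windows failure — a previous test leaving the cluster (and its open file handles) alive so the next test cannot delete the shared data directory — is to shut the cluster down in a finally block. A self-contained sketch of the pattern, using a stand-in class instead of the real MiniDFSCluster (this is not the actual HDFS-13542 patch):

```java
// Sketch of the shutdown-in-finally pattern. FakeCluster is a stand-in for
// MiniDFSCluster, used only so the example is runnable without Hadoop.
public class ShutdownPatternDemo {

    static class FakeCluster {
        boolean running = true;
        // Releases the open handles that block directory deletion on Windows.
        void shutdown() { running = false; }
    }

    // Runs a "test body" and guarantees cluster shutdown even on failure.
    static FakeCluster runTest(boolean failBody) {
        FakeCluster cluster = new FakeCluster();
        try {
            if (failBody) {
                throw new RuntimeException("test body failed");
            }
        } finally {
            cluster.shutdown();   // always runs, pass or fail
        }
        return cluster;
    }

    public static void main(String[] args) {
        FakeCluster ok = runTest(false);
        System.out.println("after passing test, running=" + ok.running);
        try {
            runTest(true);
        } catch (RuntimeException expected) {
            System.out.println("test body threw, but shutdown still ran");
        }
    }
}
```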






[jira] [Assigned] (HDFS-13530) NameNode: Fix NullPointerException when getQuotaUsageInt() invoked

2018-05-09 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDFS-13530:
-

Assignee: liuhongtong

> NameNode: Fix NullPointerException when getQuotaUsageInt() invoked
> --
>
> Key: HDFS-13530
> URL: https://issues.apache.org/jira/browse/HDFS-13530
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, namenode
>Reporter: liuhongtong
>Assignee: liuhongtong
>Priority: Major
> Attachments: HDFS-13530.001.patch
>
>
> If the directory is nonexistent, a getQuotaUsage RPC call will run into a 
> NullPointerException thrown by
> FSDirStatAndListingOp.getQuotaUsageInt().
> I think FSDirStatAndListingOp.getQuotaUsageInt() should throw a 
> FileNotFoundException when the directory is nonexistent.
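The proposed change amounts to a null guard before the inode is dereferenced. A hedged illustration with simplified stand-in types — resolve() is a hypothetical lookup, not the real INode resolution in FSDirStatAndListingOp:

```java
import java.io.FileNotFoundException;

// Sketch of the proposed guard (simplified, hypothetical names): fail with
// FileNotFoundException instead of dereferencing a null inode.
public class QuotaGuardDemo {

    // Hypothetical stand-in for inode resolution; null means the path is absent.
    static String resolve(String src) {
        return "/data".equals(src) ? "inode:/data" : null;
    }

    static String getQuotaUsage(String src) throws FileNotFoundException {
        String inode = resolve(src);
        if (inode == null) {
            // Previously execution continued and a NullPointerException surfaced.
            throw new FileNotFoundException("Directory does not exist: " + src);
        }
        return inode;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(getQuotaUsage("/data"));
        try {
            getQuotaUsage("/missing");
        } catch (FileNotFoundException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

A FileNotFoundException also gives RPC clients a meaningful error instead of a bare server-side NPE.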






[jira] [Commented] (HDFS-13528) RBF: If a directory exceeds quota limit then quota usage is not refreshed for other mount entries

2018-05-09 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469716#comment-16469716
 ] 

Xiaoyu Yao commented on HDFS-13528:
---

{quote}I have observed one more scenario, when destination directory of a mount 
entry is not present in the file system it is throwing NPE and quota refresh is 
not happening for rest of the entries.
{quote}
[~dibyendu_hadoop], the NPE issue of getQuotaUsage on non-existing directory 
has also been reported in HDFS-13530 and will be fixed. 

Will the RBF quota refresh issue still exist after that?

> RBF: If a directory exceeds quota limit then quota usage is not refreshed for 
> other mount entries 
> --
>
> Key: HDFS-13528
> URL: https://issues.apache.org/jira/browse/HDFS-13528
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
> Attachments: HDFS-13528-000.patch
>
>
> If a quota limit is exceeded, RouterQuotaUpdateService#periodicInvoke 
> receives a QuotaExceededException and does not update the quota usage for 
> the rest of the mount table entries.
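The failure mode described — one bad entry aborting the whole refresh pass — is typically fixed by catching per-entry exceptions inside the loop. A hedged, self-contained sketch with hypothetical stand-ins, not the actual router code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of per-entry exception isolation in a periodic refresh loop
// (hypothetical stand-ins, not the real RouterQuotaUpdateService).
public class QuotaRefreshDemo {

    // Stand-in for refreshing one mount entry; throws for a "bad" entry,
    // analogous to QuotaExceededException.
    static void refreshOne(String mountEntry) {
        if (mountEntry.startsWith("bad")) {
            throw new RuntimeException("quota exceeded for " + mountEntry);
        }
    }

    // Returns the entries that were successfully refreshed.
    static List<String> periodicInvoke(List<String> entries) {
        List<String> refreshed = new ArrayList<>();
        for (String e : entries) {
            try {
                refreshOne(e);
                refreshed.add(e);
            } catch (RuntimeException ex) {
                // Log and continue so one failing entry does not stop the rest.
            }
        }
        return refreshed;
    }

    public static void main(String[] args) {
        List<String> out = periodicInvoke(Arrays.asList("/a", "bad/b", "/c"));
        System.out.println(out);   // [/a, /c]
    }
}
```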






[jira] [Updated] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13537:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.3
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Thanks [~surmountian], committed to trunk, branch-3.1, branch-3.0, branch-2, 
and branch-2.9. Hopefully the Windows run tomorrow will have far fewer failures.

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13537-branch-2.000.patch, HDFS-13537.000.patch, 
> HDFS-13537.001.patch, image-2018-05-09-16-29-50-976.png, 
> image-2018-05-09-16-31-40-981.png
>
>
> In TestHdfsHelper#startMiniHdfs, the jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> When the path returned by getTestRootDir() is relative (as on Windows), the 
> resulting URI is incorrect because there is no "/" between "://file" and the 
> relative path.






[jira] [Updated] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-09 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13542:
-
Attachment: HDFS-13542-branch-2.000.patch

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: windows
> Attachments: HDFS-13542-branch-2.000.patch
>
>
> branch-2.9 shows this failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwx
> java.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}






[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469702#comment-16469702
 ] 

Íñigo Goiri commented on HDFS-13537:


Thanks [~surmountian] for the report.
Yetus is also clean for both patches.
+1
I'm committing this all the way to branch-2.9.

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537-branch-2.000.patch, HDFS-13537.000.patch, 
> HDFS-13537.001.patch, image-2018-05-09-16-29-50-976.png, 
> image-2018-05-09-16-31-40-981.png
>
>
> In TestHdfsHelper#startMiniHdfs, the jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> When the path returned by getTestRootDir() is relative (as on Windows), the 
> resulting URI is incorrect because there is no "/" between "://file" and the 
> relative path.






[jira] [Assigned] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13542:
--

Assignee: Anbang Hu

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: windows
>
> branch-2.9 shows this failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwx
> java.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}






[jira] [Updated] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-09 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13542:
-
Attachment: (was: HDFS-13542.000.patch)

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Priority: Minor
>  Labels: windows
>
> branch-2.9 shows this failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwx
> java.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}






[jira] [Updated] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-09 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13542:
-
  Attachment: HDFS-13542.000.patch
Target Version/s: 2.9.2  (was: 3.2.0, 2.9.2)
  Status: Patch Available  (was: Open)

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Priority: Minor
>  Labels: windows
>
> branch-2.9 shows this failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwx
> java.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}






[jira] [Updated] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-09 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HDFS-13542:
-
Labels: windows  (was: )

> TestBlockManager#testNeededReplicationWhileAppending fails due to improper 
> cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows
> 
>
> Key: HDFS-13542
> URL: https://issues.apache.org/jira/browse/HDFS-13542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Anbang Hu
>Priority: Minor
>  Labels: windows
>
> branch-2.9 shows this failure message on Windows:
> {code:java}
> 2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
> Permissions dump:
> path 
> 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
>  
> absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
>  permissions: drwx
> path 'E:\OSSHadoop\hadoop-hdfs-project': 
>  absolute:E:\OSSHadoop\hadoop-hdfs-project
>  permissions: drwx
> path 'E:\OSSHadoop': 
>  absolute:E:\OSSHadoop
>  permissions: drwx
> path 'E:\': 
>  absolute:E:\
>  permissions: drwx
> java.io.IOException: Could not fully delete 
> E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
>  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:515)
>  at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}






[jira] [Updated] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-09 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13537:
--
Attachment: image-2018-05-09-16-31-40-981.png

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537-branch-2.000.patch, HDFS-13537.000.patch, 
> HDFS-13537.001.patch, image-2018-05-09-16-29-50-976.png, 
> image-2018-05-09-16-31-40-981.png
>
>
> In TestHdfsHelper#startMiniHdfs, the jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> When the path returned by getTestRootDir() is relative (as on Windows), the 
> resulting URI is incorrect because there is no "/" between "://file" and the 
> relative path.






[jira] [Commented] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-09 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469685#comment-16469685
 ] 

Xiao Liang commented on HDFS-13537:
---

Sure, here's before:

!image-2018-05-09-16-29-50-976.png!

And this is after the patch:

!image-2018-05-09-16-31-40-981.png!

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537-branch-2.000.patch, HDFS-13537.000.patch, 
> HDFS-13537.001.patch, image-2018-05-09-16-29-50-976.png, 
> image-2018-05-09-16-31-40-981.png
>
>
> In TestHdfsHelper#startMiniHdfs, the jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> When the path returned by getTestRootDir() is relative (as on Windows), the 
> resulting URI is incorrect because there is no "/" between "://file" and the 
> relative path.






[jira] [Updated] (HDFS-13537) TestHdfsHelper does not generate jceks path properly for relative path in Windows

2018-05-09 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HDFS-13537:
--
Attachment: image-2018-05-09-16-29-50-976.png

> TestHdfsHelper does not generate jceks path properly for relative path in 
> Windows
> -
>
> Key: HDFS-13537
> URL: https://issues.apache.org/jira/browse/HDFS-13537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HDFS-13537-branch-2.000.patch, HDFS-13537.000.patch, 
> HDFS-13537.001.patch, image-2018-05-09-16-29-50-976.png
>
>
> In TestHdfsHelper#startMiniHdfs, the jceks path is generated as:
> {code:java}
> final String jceksPath = JavaKeyStoreProvider.SCHEME_NAME + "://file" +
> new Path(helper.getTestRootDir(), "test.jks").toUri();{code}
> When the path returned by getTestRootDir() is relative (as on Windows), the 
> resulting URI is incorrect because there is no "/" between "://file" and the 
> relative path.






[jira] [Created] (HDFS-13542) TestBlockManager#testNeededReplicationWhileAppending fails due to improper cluster shutdown in TestBlockManager#testBlockManagerMachinesArray on Windows

2018-05-09 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13542:


 Summary: TestBlockManager#testNeededReplicationWhileAppending 
fails due to improper cluster shutdown in 
TestBlockManager#testBlockManagerMachinesArray on Windows
 Key: HDFS-13542
 URL: https://issues.apache.org/jira/browse/HDFS-13542
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu


branch-2.9 shows this failure message on Windows:
{code:java}
2018-05-09 16:26:03,014 [Thread-3533] ERROR hdfs.MiniDFSCluster 
(MiniDFSCluster.java:initMiniDFSCluster(884)) - IOE creating namenodes. 
Permissions dump:
path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data': 
 absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data
 permissions: drwx
path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs': 
 absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs
 permissions: drwx
path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data': 
 absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data
 permissions: drwx
path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test': 
 absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test
 permissions: drwx
path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target': 
 absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target
 permissions: drwx
path 'E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs': 
 absolute:E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs
 permissions: drwx
path 'E:\OSSHadoop\hadoop-hdfs-project': 
 absolute:E:\OSSHadoop\hadoop-hdfs-project
 permissions: drwx
path 'E:\OSSHadoop': 
 absolute:E:\OSSHadoop
 permissions: drwx
path 'E:\': 
 absolute:E:\
 permissions: drwx
java.io.IOException: Could not fully delete 
E:\OSSHadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name-0-1
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1026)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:879)
 at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:515)
 at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:474)
 at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testNeededReplicationWhileAppending(TestBlockManager.java:465){code}






[jira] [Commented] (HDFS-13530) NameNode: Fix NullPointerException when getQuotaUsageInt() invoked

2018-05-09 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469654#comment-16469654
 ] 

Xiaoyu Yao commented on HDFS-13530:
---

Thanks [~liuhongtong] for working on this. The fix LGTM. Agree with [~ajayydv] 
that we should add a unit test for this. 

> NameNode: Fix NullPointerException when getQuotaUsageInt() invoked
> --
>
> Key: HDFS-13530
> URL: https://issues.apache.org/jira/browse/HDFS-13530
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, namenode
>Reporter: liuhongtong
>Priority: Major
> Attachments: HDFS-13530.001.patch
>
>
> If the directory is nonexistent, the getQuotaUsage RPC call will run into a
> NullPointerException thrown by FSDirStatAndListingOp.getQuotaUsageInt().
> I think FSDirStatAndListingOp.getQuotaUsageInt() should throw
> FileNotFoundException when the directory is nonexistent.
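For illustration only, a minimal self-contained sketch of the behavior proposed above. The class and the resolve() helper are hypothetical stand-ins, not the actual NameNode code: the idea is simply to resolve the path first and fail with FileNotFoundException instead of letting a null inode flow into the quota computation (which is where the NPE came from).

```java
import java.io.FileNotFoundException;

// Hypothetical sketch of the proposed fix, not the real FSDirStatAndListingOp.
class QuotaUsageSketch {
    // Stand-in for path resolution; null means the path does not exist.
    static Object resolve(String path) {
        return "/exists".equals(path) ? new Object() : null;
    }

    static Object getQuotaUsage(String path) throws FileNotFoundException {
        Object inode = resolve(path);
        if (inode == null) {
            // Fail fast with a meaningful exception instead of an NPE later.
            throw new FileNotFoundException("Path does not exist: " + path);
        }
        return inode; // real code would compute QuotaUsage from the inode
    }
}
```

A caller then gets a clear FileNotFoundException for a missing directory rather than an opaque NullPointerException from deep inside the quota code.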



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-5) Enable OzoneManager kerberos auth

2018-05-09 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-5:
--
Status: Open  (was: Patch Available)

> Enable OzoneManager kerberos auth
> -
>
> Key: HDDS-5
> URL: https://issues.apache.org/jira/browse/HDDS-5
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-5-HDDS-4.00.patch, initial-patch.patch
>
>
> enable KSM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-5) Enable OzoneManager kerberos auth

2018-05-09 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-5:
--
Status: Patch Available  (was: Open)

> Enable OzoneManager kerberos auth
> -
>
> Key: HDDS-5
> URL: https://issues.apache.org/jira/browse/HDDS-5
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-5-HDDS-4.00.patch, initial-patch.patch
>
>
> enable KSM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-6) Enable SCM kerberos auth

2018-05-09 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-6:
--
   Resolution: Fixed
Fix Version/s: (was: 0.3.0)
   HDDS-4
   Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the contribution. I've committed the patch to the feature 
branch. 

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: HDDS-4
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-6-HDDS-4.01.patch, 
> HDDS-6-HDDS-4.02.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469631#comment-16469631
 ] 

genericqa commented on HDFS-13539:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
37s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}207m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestReencryption |
|   | hadoop.fs.viewfs.TestViewFileSystemLinkFallback |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13539 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922707/HDFS-13539.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 05b35534696c 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 G

[jira] [Updated] (HDFS-13136) Avoid taking FSN lock while doing group member lookup for FSD permission check

2018-05-09 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-13136:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.3
   3.2.0
   3.1.0
   Status: Resolved  (was: Patch Available)

> Avoid taking FSN lock while doing group member lookup for FSD permission check
> --
>
> Key: HDFS-13136
> URL: https://issues.apache.org/jira/browse/HDFS-13136
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 3.1.0, 3.2.0, 3.0.3
>
> Attachments: HDFS-13136-branch-3.0.001.patch, 
> HDFS-13136-branch-3.0.002.patch, HDFS-13136.001.patch, HDFS-13136.002.patch
>
>
> Namenode has FSN lock and FSD lock. Most of the namenode operations need to 
> take FSN lock first and then FSD lock.  The permission check is done via 
> FSPermissionChecker at FSD layer assuming FSN lock is taken. 
> The FSPermissionChecker constructor invokes callerUgi.getGroups(), which can
> sometimes take seconds. There are external cache schemes such as SSSD and
> internal cache schemes for group lookup. However, the delay can still occur
> during a cache refresh, which causes severe FSN lock contention and an
> unresponsive namenode.
> Checking the current code, we found that getBlockLocations(..) got this right,
> but some methods, such as getFileInfo(..) and getContentSummary(..), did not.
> This ticket is opened to ensure that the group lookup for the permission
> checker happens outside the FSN lock.
>  
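As an illustration of the lock-ordering pattern described above, here is a minimal sketch with made-up names (not the actual FSNamesystem/FSPermissionChecker code): the potentially slow group lookup runs before the global lock is taken, so a slow lookup no longer blocks every other operation holding out for that lock.

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of "do the expensive lookup outside the lock".
class PermissionCheckSketch {
    static final ReentrantReadWriteLock fsnLock = new ReentrantReadWriteLock();

    // Stand-in for callerUgi.getGroups(), which may take seconds on cache refresh.
    static List<String> slowGroupLookup(String user) {
        return List.of(user + "-group");
    }

    static boolean checkPermission(String user) {
        // Slow part happens BEFORE acquiring the global lock.
        List<String> groups = slowGroupLookup(user);
        fsnLock.readLock().lock();
        try {
            // Only the cheap membership check runs under the lock.
            return groups.contains(user + "-group");
        } finally {
            fsnLock.readLock().unlock();
        }
    }
}
```

The design point is that lock hold time is bounded by the cheap check, independent of how long the group lookup takes.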



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13136) Avoid taking FSN lock while doing group member lookup for FSD permission check

2018-05-09 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469630#comment-16469630
 ] 

Xiaoyu Yao commented on HDFS-13136:
---

Sorry, I'm wrong. This is in branch-3.0. I will resolve the ticket. Thanks 
[~yzhangal].

> Avoid taking FSN lock while doing group member lookup for FSD permission check
> --
>
> Key: HDFS-13136
> URL: https://issues.apache.org/jira/browse/HDFS-13136
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDFS-13136-branch-3.0.001.patch, 
> HDFS-13136-branch-3.0.002.patch, HDFS-13136.001.patch, HDFS-13136.002.patch
>
>
> Namenode has FSN lock and FSD lock. Most of the namenode operations need to 
> take FSN lock first and then FSD lock.  The permission check is done via 
> FSPermissionChecker at FSD layer assuming FSN lock is taken. 
> The FSPermissionChecker constructor invokes callerUgi.getGroups(), which can
> sometimes take seconds. There are external cache schemes such as SSSD and
> internal cache schemes for group lookup. However, the delay can still occur
> during a cache refresh, which causes severe FSN lock contention and an
> unresponsive namenode.
> Checking the current code, we found that getBlockLocations(..) got this right,
> but some methods, such as getFileInfo(..) and getContentSummary(..), did not.
> This ticket is opened to ensure that the group lookup for the permission
> checker happens outside the FSN lock.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-6) Enable SCM kerberos auth

2018-05-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469609#comment-16469609
 ] 

genericqa commented on HDDS-6:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
34s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
10s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
57s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
7s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
0s{color} | {color:red} hadoop-hdds/common in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
7s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
29s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 43s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
17s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 55s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} common in the patch passed. 

[jira] [Commented] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-09 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469596#comment-16469596
 ] 

Tsz Wo Nicholas Sze commented on HDDS-19:
-

Let's wait for RATIS-237 and then update to the latest snapshot. We should 
also remove the jctools dependency from hadoop-project/pom.xml:

{code}
@@ -875,12 +875,6 @@
-      <dependency>
-        <groupId>org.jctools</groupId>
-        <artifactId>jctools-core</artifactId>
-        <version>1.2.1</version>
-      </dependency>
-
{code}


> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13540) DFSStripedInputStream should not allocate new buffers during close / unbuffer

2018-05-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469584#comment-16469584
 ] 

genericqa commented on HDFS-13540:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}186m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReencryption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13540 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922686/HDFS-13540.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e6bd012c19ee 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / af4fc2e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC

[jira] [Comment Edited] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-05-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469299#comment-16469299
 ] 

Yongjun Zhang edited comment on HDFS-13388 at 5/9/18 10:10 PM:
---

Hi [~elgoiri],

This Jira was released in 3.0.2 and then got reverted from branch-3.0. It seems 
reasonable to get it back into branch-3.0, which is now targeting 3.0.3. Would 
you please do so?

Thanks.


was (Author: yzhangal):
Hi [~elgoiri],

This Jira was released in 3.0.2 then get reverted from branch-3.0. Seems 
reasonable to get it to branch-3.0, which is now targeting for 3.0.3. Would you 
please do so?

Thanks.

 

 

 

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, 
> HADOOP-13388.0009.patch, HADOOP-13388.0010.patch, HADOOP-13388.0011.patch, 
> HADOOP-13388.0012.patch, HADOOP-13388.0013.patch, HADOOP-13388.0014.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles invoked methods by calling multiple configured NNs.
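For readers unfamiliar with the intended hedging behavior, here is a minimal sketch under stated assumptions (the class and method names are illustrative, not the Hadoop API): on the first call all "namenodes" are probed and the winner is cached; subsequent calls go only to the cached one, which is exactly the behavior the issue says the current code fails to preserve.

```java
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of request hedging with a cached winner; the real
// RequestHedgingProxyProvider fans out concurrently, simplified here.
class HedgingSketch {
    private String currentProxy; // cached after the first successful call

    String invoke(List<String> namenodes, Function<String, Boolean> isActive) {
        if (currentProxy != null) {
            return currentProxy; // subsequent calls skip the fan-out entirely
        }
        for (String nn : namenodes) { // first call: probe each candidate
            if (isActive.apply(nn)) {
                currentProxy = nn; // remember the active NN for later calls
                return nn;
            }
        }
        throw new IllegalStateException("no active namenode");
    }
}
```

The bug described above amounts to the cached-winner branch never being reached, so every call pays the full fan-out cost.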



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-05-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469583#comment-16469583
 ] 

Yongjun Zhang commented on HDFS-13388:
--

Welcome, and thanks for taking care of that, [~elgoiri]. I saw it in branch-3.0 
now. 

 

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, 
> HADOOP-13388.0009.patch, HADOOP-13388.0010.patch, HADOOP-13388.0011.patch, 
> HADOOP-13388.0012.patch, HADOOP-13388.0013.patch, HADOOP-13388.0014.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles invoked methods by calling multiple configured NNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-10) docker changes to test secure ozone cluster

2018-05-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469575#comment-16469575
 ] 

genericqa commented on HDDS-10:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
15s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
16s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-10 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922713/HDDS-10-HDDS-4.00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  shellcheck  shelldocs  |
| uname | Linux f1be04545d22 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / af4fc2e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/61/testReport/ |
| Max. process+thread count | 410 (vs. ulimit of 1) |
| modules | C: hadoop-dist U: hadoop-dist |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/61/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> docker changes to test secure ozone cluster
> ---
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security

[jira] [Commented] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445

2018-05-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469573#comment-16469573
 ] 

Yongjun Zhang commented on HDFS-13430:
--

Thanks [~shahrs87].

 

> Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
> --
>
> Key: HDFS-13430
> URL: https://issues.apache.org/jira/browse/HDFS-13430
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13430.01.patch
>
>
> Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the 
> hadoop-common precommit runs.
> This is caught by our internal pre-commit using dist-test, and appears to be 
> the only failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-5) Enable OzoneManager kerberos auth

2018-05-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469552#comment-16469552
 ] 

genericqa commented on HDDS-5:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDDS-5 does not apply to HDDS-4. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-5 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922732/HDDS-5-HDDS-4.00.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/62/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Enable OzoneManager kerberos auth
> -
>
> Key: HDDS-5
> URL: https://issues.apache.org/jira/browse/HDDS-5
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-5-HDDS-4.00.patch, initial-patch.patch
>
>
> enable KSM kerberos auth






[jira] [Updated] (HDDS-7) Enable kerberos auth for Ozone client in hadoop rpc

2018-05-09 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-7?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-7:
--
Fix Version/s: (was: 0.3.0)

> Enable kerberos auth for Ozone client in hadoop rpc 
> 
>
> Key: HDDS-7
> URL: https://issues.apache.org/jira/browse/HDDS-7
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client, SCM Client
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-4-HDDS-7-poc.patch
>
>
> Enable kerberos auth for Ozone client in hadoop rpc.






[jira] [Updated] (HDDS-5) Enable OzoneManager kerberos auth

2018-05-09 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-5:
--
Status: Patch Available  (was: Open)

> Enable OzoneManager kerberos auth
> -
>
> Key: HDDS-5
> URL: https://issues.apache.org/jira/browse/HDDS-5
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-5-HDDS-4.00.patch, initial-patch.patch
>
>
> enable KSM kerberos auth






[jira] [Updated] (HDDS-5) Enable OzoneManager kerberos auth

2018-05-09 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-5:
--
Attachment: HDDS-5-HDDS-4.00.patch

> Enable OzoneManager kerberos auth
> -
>
> Key: HDDS-5
> URL: https://issues.apache.org/jira/browse/HDDS-5
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-5-HDDS-4.00.patch, initial-patch.patch
>
>
> enable KSM kerberos auth






[jira] [Commented] (HDFS-13434) RBF: Fix dead links in RBF document

2018-05-09 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469489#comment-16469489
 ] 

Íñigo Goiri commented on HDFS-13434:


I had forgotten... sorry.
Updated the fix version.

> RBF: Fix dead links in RBF document
> ---
>
> Key: HDFS-13434
> URL: https://issues.apache.org/jira/browse/HDFS-13434
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Chetna Chaudhari
>Priority: Major
>  Labels: newbie
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13434.patch
>
>
> There are many dead links in 
> [http://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html.]
>  Let's fix them.






[jira] [Updated] (HDFS-13434) RBF: Fix dead links in RBF document

2018-05-09 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13434:
---
Fix Version/s: 3.0.3
   2.9.2
   3.1.1
   3.2.0
   2.10.0

> RBF: Fix dead links in RBF document
> ---
>
> Key: HDFS-13434
> URL: https://issues.apache.org/jira/browse/HDFS-13434
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Chetna Chaudhari
>Priority: Major
>  Labels: newbie
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13434.patch
>
>
> There are many dead links in 
> [http://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html.]
>  Let's fix them.






[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-05-09 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469488#comment-16469488
 ] 

Íñigo Goiri commented on HDFS-13388:


bq. This Jira was released in 3.0.2 then get reverted from branch-3.0. Seems 
reasonable to get it to branch-3.0, which is now targeting for 3.0.3. Would you 
please do so?

Yep, we had some issues in the process.
I committed to branch-3.1 and branch-3.0 and fixed the fix version.
Let me know, and thanks for taking the pain of going through all the patches.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, 
> HADOOP-13388.0009.patch, HADOOP-13388.0010.patch, HADOOP-13388.0011.patch, 
> HADOOP-13388.0012.patch, HADOOP-13388.0013.patch, HADOOP-13388.0014.patch
>
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles invoked methods by calling multiple configured NNs.
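The hedge-then-pin behaviour the description says HDFS-7858 intended can be sketched in a few lines. This is a self-contained toy, not the Hadoop RetryInvocationHandler/RequestHedgingProxyProvider code; the HedgingInvoker class and the string stand-ins for NameNode proxies are illustrative names only:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

/** Toy hedging proxy: try all NNs once, then pin the one that answered. */
class HedgingInvoker<P> {
    private final List<P> proxies;
    private volatile P currentUsedProxy;      // null until some call succeeds

    HedgingInvoker(List<P> proxies) {
        this.proxies = proxies;
    }

    <R> R invoke(Function<P, R> call) {
        P pinned = currentUsedProxy;
        if (pinned != null) {
            return call.apply(pinned);        // fast path: only the known-active NN
        }
        RuntimeException last = null;
        for (P proxy : proxies) {             // hedge across every configured NN
            try {
                R result = call.apply(proxy);
                currentUsedProxy = proxy;     // pin the NN that answered
                return result;
            } catch (RuntimeException e) {
                last = e;
            }
        }
        throw last != null ? last : new IllegalStateException("no proxies");
    }
}

public class HedgingDemo {
    public static void main(String[] args) {
        AtomicInteger standby = new AtomicInteger();
        AtomicInteger active = new AtomicInteger();
        HedgingInvoker<String> invoker =
            new HedgingInvoker<>(List.of("standbyNN", "activeNN"));
        Function<String, String> getFileInfo = nn -> {
            if (nn.equals("standbyNN")) {
                standby.incrementAndGet();
                throw new RuntimeException("standby");
            }
            active.incrementAndGet();
            return "fileInfo";
        };
        invoker.invoke(getFileInfo);          // hedged: probes both NNs
        invoker.invoke(getFileInfo);          // pinned: hits only the active NN
        System.out.println("standby=" + standby.get() + " active=" + active.get());
    }
}
```

Running the demo prints standby=1 active=2: the standby is probed only during the first, hedged call, which is exactly the pinning the description says the current code fails to do.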






[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-05-09 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13388:
---
Fix Version/s: 3.0.3
   3.1.1

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, 
> HADOOP-13388.0009.patch, HADOOP-13388.0010.patch, HADOOP-13388.0011.patch, 
> HADOOP-13388.0012.patch, HADOOP-13388.0013.patch, HADOOP-13388.0014.patch
>
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already found the successful NN.
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles invoked methods by calling multiple configured NNs.






[jira] [Commented] (HDFS-13434) RBF: Fix dead links in RBF document

2018-05-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469463#comment-16469463
 ] 

Yongjun Zhang commented on HDFS-13434:
--

Hi [~elgoiri],

Thanks for working on this issue, would you please update the fix versions 
accordingly?

 

> RBF: Fix dead links in RBF document
> ---
>
> Key: HDFS-13434
> URL: https://issues.apache.org/jira/browse/HDFS-13434
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Chetna Chaudhari
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13434.patch
>
>
> There are many dead links in 
> [http://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html.]
>  Let's fix them.






[jira] [Commented] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445

2018-05-09 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469462#comment-16469462
 ] 

Rushabh S Shah commented on HDFS-13430:
---

bq.  Are all of them reverted from ALL branches?
Yes

> Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
> --
>
> Key: HDFS-13430
> URL: https://issues.apache.org/jira/browse/HDFS-13430
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13430.01.patch
>
>
> Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the 
> hadoop-common precommit runs.
> This is caught by our internal pre-commit using dist-test, and appears to be 
> the only failure.






[jira] [Commented] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445

2018-05-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469457#comment-16469457
 ] 

Yongjun Zhang commented on HDFS-13430:
--

Hi [~xiaochen] and [~shahrs87],

Thank you guys for working on HADOOP-14445 and this one here. Are all of them 
reverted from ALL branches?

 

 

> Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
> --
>
> Key: HDFS-13430
> URL: https://issues.apache.org/jira/browse/HDFS-13430
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13430.01.patch
>
>
> Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the 
> hadoop-common precommit runs.
> This is caught by our internal pre-commit using dist-test, and appears to be 
> the only failure.






[jira] [Updated] (HDDS-10) docker changes to test secure ozone cluster

2018-05-09 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-10:
---
Status: Patch Available  (was: Open)

> docker changes to test secure ozone cluster
> ---
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-10-HDDS-4.00.patch
>
>
> Update docker compose and settings to test secure ozone cluster.






[jira] [Updated] (HDDS-10) docker changes to test secure ozone cluster

2018-05-09 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-10:
---
Fix Version/s: (was: 0.3.0)

> docker changes to test secure ozone cluster
> ---
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-10-HDDS-4.00.patch
>
>
> Update docker compose and settings to test secure ozone cluster.






[jira] [Commented] (HDDS-10) docker changes to test secure ozone cluster

2018-05-09 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469399#comment-16469399
 ] 

Ajay Kumar commented on HDDS-10:


Ping [~xyao], [~elek] for initial review. Maybe we can copy the images to an 
official Apache or Ozone docker repo.

> docker changes to test secure ozone cluster
> ---
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-10-HDDS-4.00.patch
>
>
> Update docker compose and settings to test secure ozone cluster.
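For context, a secure compose setup of the kind this patch adds typically wires a KDC container beside the Ozone daemons and points them at a shared kerberos env file. The fragment below is only an illustration of that shape; the service names, image names, env file, command, and port are assumptions, not the contents of the attached patch:

```yaml
version: "3"
services:
  kdc:
    image: ozone-secure/kdc          # hypothetical image providing a MIT KDC
    hostname: kdc
  scm:
    image: apache/hadoop-runner      # hypothetical runner image
    hostname: scm
    env_file: ./docker-config        # kerberos principals/keytabs for daemons
    command: ["ozone", "scm"]
  ozoneManager:
    image: apache/hadoop-runner
    hostname: om
    env_file: ./docker-config
    environment:
      WAITFOR: scm:9876              # assumed SCM RPC port
    command: ["ozone", "om"]
```

With something like this, `docker-compose up` brings up a KDC that the daemons can authenticate against before they start serving.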






[jira] [Updated] (HDDS-10) docker changes to test secure ozone cluster

2018-05-09 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-10?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-10:
---
Attachment: HDDS-10-HDDS-4.00.patch

> docker changes to test secure ozone cluster
> ---
>
> Key: HDDS-10
> URL: https://issues.apache.org/jira/browse/HDDS-10
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-10-HDDS-4.00.patch
>
>
> Update docker compose and settings to test secure ozone cluster.






[jira] [Updated] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-05-09 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13448:
---
Status: Patch Available  (was: Open)

Added a unit test.

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 3.0.1, 2.9.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch, HDFS-13448.6.patch
>
>
> According to the HDFS Block Placement Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  This comes into play when you have, for example, a Flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica, and this 
> leads to uneven block placements, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example: if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only insofar as the first block replica will now 
> always be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.
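To make the three behaviours concrete, here is a self-contained toy model. It is not Hadoop's BlockPlacementPolicyDefault, and the "ignoreLocality" flag name is a hypothetical stand-in for the flag this issue proposes:

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;

/** Toy model of first-replica choice; NOT Hadoop's placement policy. */
public class FirstReplicaChooser {

    /** A datanode identified by host and rack. */
    static final class Node {
        final String host;
        final String rack;
        Node(String host, String rack) { this.host = host; this.rack = rack; }
    }

    static Node chooseFirstReplica(Node writer, List<Node> cluster,
                                   boolean noLocalWrite, boolean ignoreLocality,
                                   Random rng) {
        if (ignoreLocality) {
            // Proposed flag: any datanode in the cluster, locality ignored.
            return cluster.get(rng.nextInt(cluster.size()));
        }
        if (noLocalWrite) {
            // Existing NO_LOCAL_WRITE hint: skip the writer's host, but the
            // default policy still prefers the writer's rack.
            List<Node> sameRack = cluster.stream()
                .filter(n -> n.rack.equals(writer.rack) && !n.host.equals(writer.host))
                .collect(Collectors.toList());
            return sameRack.get(rng.nextInt(sameRack.size()));
        }
        // Default: the writer's own datanode gets the first replica.
        return writer;
    }

    public static void main(String[] args) {
        Node writer = new Node("dn1", "rackA");
        List<Node> cluster = List.of(writer,
            new Node("dn2", "rackA"), new Node("dn3", "rackB"));
        Random rng = new Random();
        System.out.println("default:        " + chooseFirstReplica(writer, cluster, false, false, rng).host);
        System.out.println("noLocalWrite:   " + chooseFirstReplica(writer, cluster, true, false, rng).host);
        System.out.println("ignoreLocality: " + chooseFirstReplica(writer, cluster, false, true, rng).host);
    }
}
```

With NO_LOCAL_WRITE the choice never leaves rackA, which is the hot-spotting the description complains about; only the proposed flag spreads the first replica across both racks.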






[jira] [Updated] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-05-09 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13448:
---
Status: Open  (was: Patch Available)

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 3.0.1, 2.9.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch, HDFS-13448.6.patch
>
>
> According to the HDFS Block Placement Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  This comes into play when you have, for example, a Flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica, and this 
> leads to uneven block placements, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example: if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only insofar as the first block replica will now 
> always be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.






[jira] [Updated] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-05-09 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13448:
---
Attachment: HDFS-13448.6.patch

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch, HDFS-13448.6.patch
>
>
> According to the HDFS Block Placement Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  This comes into play when you have, for example, a Flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica, and this 
> leads to uneven block placements, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example: if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only insofar as the first block replica will now 
> always be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.






[jira] [Commented] (HDDS-6) Enable SCM kerberos auth

2018-05-09 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469370#comment-16469370
 ] 

Xiaoyu Yao commented on HDDS-6:
---

Thanks [~ajayydv] for the update. v2 patch looks good to me. +1 pending Jenkins.

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-6-HDDS-4.01.patch, 
> HDDS-6-HDDS-4.02.patch
>
>
> Enable SCM kerberos auth






[jira] [Commented] (HDFS-13534) libhdfs++: Fix GCC7 build

2018-05-09 Thread Istvan Fajth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469367#comment-16469367
 ] 

Istvan Fajth commented on HDFS-13534:
-

Thank you guys for finding the cause and for the work done on this. I can 
confirm that the build now runs successfully in my Mac environment as well.

> libhdfs++: Fix GCC7 build
> -
>
> Key: HDFS-13534
> URL: https://issues.apache.org/jira/browse/HDFS-13534
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Major
> Attachments: HDFS-13534.000.patch, HDFS-13534.001.patch
>
>
> After merging HDFS-13403 [~pifta] noticed the build broke on some platforms.  
> [~bibinchundatt] pointed out that prior to gcc 7 mutex, future, and regex 
> implicitly included functional.  Without that implicit include the compiler 
> errors on the std::function in ioservice.h.






[jira] [Commented] (HDFS-13136) Avoid taking FSN lock while doing group member lookup for FSD permission check

2018-05-09 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469357#comment-16469357
 ] 

Xiaoyu Yao commented on HDFS-13136:
---

[~yzhangal], thanks for the heads up. This has been committed to trunk and 
branch-3.1.

 
{quote}I saw it's in branch-3.0 which will target for 3.0.3.
{quote}
The branch-3.0 patch has not been committed yet. I will need to rebase the 
patch and get a new Jenkins run before committing and resolving it.

> Avoid taking FSN lock while doing group member lookup for FSD permission check
> --
>
> Key: HDFS-13136
> URL: https://issues.apache.org/jira/browse/HDFS-13136
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDFS-13136-branch-3.0.001.patch, 
> HDFS-13136-branch-3.0.002.patch, HDFS-13136.001.patch, HDFS-13136.002.patch
>
>
> Namenode has FSN lock and FSD lock. Most of the namenode operations need to 
> take FSN lock first and then FSD lock.  The permission check is done via 
> FSPermissionChecker at FSD layer assuming FSN lock is taken. 
> The FSPermissionChecker constructor invokes callerUgi.getGroups(), which can 
> sometimes take seconds. There are external cache schemes such as SSSD and 
> internal cache schemes for group lookup. However, the delay could still occur 
> during cache refresh, which causes severe FSN lock contentions and 
> unresponsive namenode issues.
> Checking the current code, we found that getBlockLocations(..) did it right 
> but some methods such as getFileInfo(..), getContentSummary(..) did it wrong. 
> This ticket is open to ensure the group lookup for permission checker is 
> outside the FSN lock.  
>  
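The fix described in the ticket amounts to moving the expensive group lookup out of the locked section. A minimal sketch of the pattern follows; the class and method names are hypothetical stand-ins, not the actual FSPermissionChecker/FSNamesystem code:

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch with hypothetical names: resolve the caller's groups
// *before* taking the namesystem lock, so a slow lookup (e.g. a cache
// refresh against SSSD/LDAP) cannot stall every other operation.
public class PermissionCheckSketch {
    private final ReentrantReadWriteLock fsnLock = new ReentrantReadWriteLock();

    // Stand-in for callerUgi.getGroups(): may take seconds on a cache miss.
    List<String> lookupGroups(String user) {
        return List.of(user + "-group");
    }

    boolean hasAccess(String user, String requiredGroup) {
        // The potentially slow lookup happens outside the lock.
        List<String> groups = lookupGroups(user);
        fsnLock.readLock().lock();
        try {
            // Only the cheap membership check runs under the lock.
            return groups.contains(requiredGroup);
        } finally {
            fsnLock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        PermissionCheckSketch s = new PermissionCheckSketch();
        System.out.println(s.hasAccess("alice", "alice-group")); // true
        System.out.println(s.hasAccess("alice", "admins"));      // false
    }
}
```

The lock hold time then depends only on the membership check, regardless of how long the lookup itself takes.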



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-09 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13539:
-
Attachment: HDFS-13539.02.patch

> DFSInputStream NPE when reportCheckSumFailure
> -
>
> Key: HDFS-13539
> URL: https://issues.apache.org/jira/browse/HDFS-13539
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13539.01.patch, HDFS-13539.02.patch
>
>
> We have seen the following exception with DFSStripedInputStream.
> {noformat}
> readDirect: FSDataInputStream#read error:
> NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
> {noformat}
> Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the 
> only possible null object. ({{currentLocatedBlock.getLocations()}} cannot be 
> null, because the {{LocatedBlock}} constructor checks {{locs}} and assigns 
> {{EMPTY_LOCS}} if it is null.)
> The original exception is masked by the NPE.
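The null guard this analysis points to can be sketched as follows; the class and field names here are hypothetical stand-ins, not the actual DFSInputStream code:

```java
// Illustrative sketch with hypothetical names: guard the possibly-null
// block so the checksum-failure report cannot mask the original
// exception with an NPE.
public class ChecksumReportSketch {
    static class LocatedBlock {
        final String[] locations;
        LocatedBlock(String[] locs) {
            // Mirrors the LocatedBlock constructor described above: after
            // construction, locations is never null (EMPTY_LOCS stand-in).
            this.locations = (locs == null) ? new String[0] : locs;
        }
    }

    LocatedBlock currentBlock; // may still be null mid-read

    int reportableLocationCount() {
        // Null-check the block itself, not its locations array.
        return (currentBlock == null) ? 0 : currentBlock.locations.length;
    }

    public static void main(String[] args) {
        ChecksumReportSketch s = new ChecksumReportSketch();
        System.out.println(s.reportableLocationCount()); // 0, no NPE
        s.currentBlock = new LocatedBlock(null);
        System.out.println(s.reportableLocationCount()); // 0: empty locations
        s.currentBlock = new LocatedBlock(new String[] {"dn1", "dn2"});
        System.out.println(s.reportableLocationCount()); // 2
    }
}
```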



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-09 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469343#comment-16469343
 ] 

Xiao Chen commented on HDFS-13539:
--

Thanks for the review, Eddy. Comments are addressed in [^HDFS-13539.02.patch].
The added synchronization is for readability, so other people don't have to 
trace up and check whether the caller is synchronized. But I agree that without 
it the patch would look cleaner.

> DFSInputStream NPE when reportCheckSumFailure
> -
>
> Key: HDFS-13539
> URL: https://issues.apache.org/jira/browse/HDFS-13539
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13539.01.patch, HDFS-13539.02.patch
>
>
> We have seen the following exception with DFSStripedInputStream.
> {noformat}
> readDirect: FSDataInputStream#read error:
> NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
> {noformat}
> Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the 
> only possible null object. ({{currentLocatedBlock.getLocations()}} cannot be 
> null, because the {{LocatedBlock}} constructor checks {{locs}} and assigns 
> {{EMPTY_LOCS}} if it is null.)
> The original exception is masked by the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-6) Enable SCM kerberos auth

2018-05-09 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469322#comment-16469322
 ] 

Ajay Kumar commented on HDDS-6:
---

[~xyao], patch v2 addresses your comments, the shellcheck issue, and the failed 
test case in {{TestOzoneConfigurationFields}}. (The other test failures are 
unrelated.)

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-6-HDDS-4.01.patch, 
> HDDS-6-HDDS-4.02.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13136) Avoid taking FSN lock while doing group member lookup for FSD permission check

2018-05-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469317#comment-16469317
 ] 

genericqa commented on HDFS-13136:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-13136 does not apply to branch-3.0. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13136 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911632/HDFS-13136-branch-3.0.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24163/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Avoid taking FSN lock while doing group member lookup for FSD permission check
> --
>
> Key: HDFS-13136
> URL: https://issues.apache.org/jira/browse/HDFS-13136
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDFS-13136-branch-3.0.001.patch, 
> HDFS-13136-branch-3.0.002.patch, HDFS-13136.001.patch, HDFS-13136.002.patch
>
>
> Namenode has FSN lock and FSD lock. Most of the namenode operations need to 
> take FSN lock first and then FSD lock.  The permission check is done via 
> FSPermissionChecker at FSD layer assuming FSN lock is taken. 
> The FSPermissionChecker constructor invokes callerUgi.getGroups(), which can 
> sometimes take seconds. There are external cache schemes such as SSSD and 
> internal cache schemes for group lookup. However, the delay could still occur 
> during cache refresh, which causes severe FSN lock contentions and 
> unresponsive namenode issues.
> Checking the current code, we found that getBlockLocations(..) did it right 
> but some methods such as getFileInfo(..), getContentSummary(..) did it wrong. 
> This ticket is open to ensure the group lookup for permission checker is 
> outside the FSN lock.  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-6) Enable SCM kerberos auth

2018-05-09 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-6:
--
Attachment: HDDS-6-HDDS-4.02.patch

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-6-HDDS-4.01.patch, 
> HDDS-6-HDDS-4.02.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-6) Enable SCM kerberos auth

2018-05-09 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-6:
--
Attachment: (was: HDDS-6-HDDS-4.02.patch)

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-6-HDDS-4.01.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-6) Enable SCM kerberos auth

2018-05-09 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-6:
--
Attachment: HDDS-6-HDDS-4.02.patch

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-6-HDDS-4.01.patch, 
> HDDS-6-HDDS-4.02.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-05-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469299#comment-16469299
 ] 

Yongjun Zhang commented on HDFS-13388:
--

Hi [~elgoiri],

This jira was released in 3.0.2 and then reverted from branch-3.0. It seems 
reasonable to get it back into branch-3.0, which is now targeting 3.0.3. Would 
you please do so?

Thanks.
> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, 
> HADOOP-13388.0009.patch, HADOOP-13388.0010.patch, HADOOP-13388.0011.patch, 
> HADOOP-13388.0012.patch, HADOOP-13388.0013.patch, HADOOP-13388.0014.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every time, 
> even when we have already found the successful NN. 
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class, which handles invoked methods by 
> calling multiple configured NNs.
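The intended behavior (hedge only the first call across all configured NNs, then stick with the previously successful one) can be sketched like this; all names are hypothetical and the RPC is simulated, so this illustrates the pattern rather than the RequestHedgingProxyProvider implementation:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

// Illustrative sketch with hypothetical names and a simulated RPC: hedge
// only the first call across all configured targets, then remember the
// winner so subsequent calls go to a single target.
public class HedgingSketch {
    private final List<String> targets;
    private volatile String currentTarget; // the previously successful target
    private final ExecutorService pool = Executors.newCachedThreadPool();

    HedgingSketch(List<String> targets) {
        this.targets = targets;
    }

    // Stand-in for an RPC: only "nn1" behaves as the active node here.
    String call(String target) throws Exception {
        if (!"nn1".equals(target)) {
            throw new Exception("standby: " + target);
        }
        return "ok from " + target;
    }

    String invoke() throws Exception {
        if (currentTarget != null) {
            return call(currentTarget); // no fan-out after the first success
        }
        List<Callable<String>> calls = targets.stream()
                .map(t -> (Callable<String>) () -> {
                    String result = call(t);
                    currentTarget = t; // cache the winner
                    return result;
                })
                .collect(Collectors.toList());
        return pool.invokeAny(calls); // first successful result wins
    }

    void shutdown() {
        pool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        HedgingSketch h = new HedgingSketch(List.of("nn0", "nn1"));
        System.out.println(h.invoke()); // hedged: fans out, caches nn1
        System.out.println(h.invoke()); // direct call to the cached target
        h.shutdown();
    }
}
```

The bug described above corresponds to `currentTarget` never being consulted, so every `invoke()` takes the fan-out path.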



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13136) Avoid taking FSN lock while doing group member lookup for FSD permission check

2018-05-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469290#comment-16469290
 ] 

Yongjun Zhang edited comment on HDFS-13136 at 5/9/18 6:50 PM:
--

Hi [~xyao],

Thanks for your work here, could it be Resolved since it's committed?

I saw it's in branch-3.0 which will target for 3.0.3.

Thanks.


was (Author: yzhangal):
Hi [~xyao],

Thanks for your work here, could it be Resolved since it's committed?

 

> Avoid taking FSN lock while doing group member lookup for FSD permission check
> --
>
> Key: HDFS-13136
> URL: https://issues.apache.org/jira/browse/HDFS-13136
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDFS-13136-branch-3.0.001.patch, 
> HDFS-13136-branch-3.0.002.patch, HDFS-13136.001.patch, HDFS-13136.002.patch
>
>
> Namenode has FSN lock and FSD lock. Most of the namenode operations need to 
> take FSN lock first and then FSD lock.  The permission check is done via 
> FSPermissionChecker at FSD layer assuming FSN lock is taken. 
> The FSPermissionChecker constructor invokes callerUgi.getGroups(), which can 
> sometimes take seconds. There are external cache schemes such as SSSD and 
> internal cache schemes for group lookup. However, the delay could still occur 
> during cache refresh, which causes severe FSN lock contentions and 
> unresponsive namenode issues.
> Checking the current code, we found that getBlockLocations(..) did it right 
> but some methods such as getFileInfo(..), getContentSummary(..) did it wrong. 
> This ticket is open to ensure the group lookup for permission checker is 
> outside the FSN lock.  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13136) Avoid taking FSN lock while doing group member lookup for FSD permission check

2018-05-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469290#comment-16469290
 ] 

Yongjun Zhang commented on HDFS-13136:
--

Hi [~xyao],

Thanks for your work here, could it be Resolved since it's committed?

 

> Avoid taking FSN lock while doing group member lookup for FSD permission check
> --
>
> Key: HDFS-13136
> URL: https://issues.apache.org/jira/browse/HDFS-13136
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDFS-13136-branch-3.0.001.patch, 
> HDFS-13136-branch-3.0.002.patch, HDFS-13136.001.patch, HDFS-13136.002.patch
>
>
> Namenode has FSN lock and FSD lock. Most of the namenode operations need to 
> take FSN lock first and then FSD lock.  The permission check is done via 
> FSPermissionChecker at FSD layer assuming FSN lock is taken. 
> The FSPermissionChecker constructor invokes callerUgi.getGroups(), which can 
> sometimes take seconds. There are external cache schemes such as SSSD and 
> internal cache schemes for group lookup. However, the delay could still occur 
> during cache refresh, which causes severe FSN lock contentions and 
> unresponsive namenode issues.
> Checking the current code, we found that getBlockLocations(..) did it right 
> but some methods such as getFileInfo(..), getContentSummary(..) did it wrong. 
> This ticket is open to ensure the group lookup for permission checker is 
> outside the FSN lock.  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13062) Provide support for JN to use separate journal disk per namespace

2018-05-09 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13062:
-
Fix Version/s: (was: 3.0.1)
   3.0.3

> Provide support for JN to use separate journal disk per namespace
> -
>
> Key: HDFS-13062
> URL: https://issues.apache.org/jira/browse/HDFS-13062
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, journal-node
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13062.00.patch, HDFS-13062.01.patch, 
> HDFS-13062.02.patch, HDFS-13062.03.patch, HDFS-13062.04.patch, 
> HDFS-13062.05.patch, HDFS-13062.06.patch
>
>
> In Federated HA setup, provide support for separate journal disk for each 
> namespace.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13048) LowRedundancyReplicatedBlocks metric can be negative

2018-05-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469281#comment-16469281
 ] 

Yongjun Zhang commented on HDFS-13048:
--

Hi [~ajisakaa],

FYI, I just updated the fix version to 3.0.3 from 3.0.1, since I don't see it 
in 3.0.1. Thanks.

 

> LowRedundancyReplicatedBlocks metric can be negative
> 
>
> Key: HDFS-13048
> URL: https://issues.apache.org/jira/browse/HDFS-13048
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13048-sample.patch, HDFS-13048.001.patch, 
> HDFS-13048.002.patch
>
>
> I'm seeing {{LowRedundancyReplicatedBlocks}} become negative. This should be 
> 0 or positive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13048) LowRedundancyReplicatedBlocks metric can be negative

2018-05-09 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13048:
-
Fix Version/s: (was: 3.0.1)
   3.0.3

> LowRedundancyReplicatedBlocks metric can be negative
> 
>
> Key: HDFS-13048
> URL: https://issues.apache.org/jira/browse/HDFS-13048
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13048-sample.patch, HDFS-13048.001.patch, 
> HDFS-13048.002.patch
>
>
> I'm seeing {{LowRedundancyReplicatedBlocks}} become negative. This should be 
> 0 or positive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-09 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469276#comment-16469276
 ] 

Lei (Eddy) Xu commented on HDFS-13539:
--

Thanks Xiao for the patch.

Some minor comments:

* Could we rename {{getDataNodeCount()}} to {{getCurrentBlockLocationsLength()}}? 
It is a private function and we don't need synchronized for it.
* Are the {{synchronized}} additions in {{DFSStripedInputStream}} relevant to 
the fix? How about only doing the NPE fix in this patch?

The rest LGTM. +1 pending the fix.

> DFSInputStream NPE when reportCheckSumFailure
> -
>
> Key: HDFS-13539
> URL: https://issues.apache.org/jira/browse/HDFS-13539
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13539.01.patch
>
>
> We have seen the following exception with DFSStripedInputStream.
> {noformat}
> readDirect: FSDataInputStream#read error:
> NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
> {noformat}
> Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the 
> only possible null object. ({{currentLocatedBlock.getLocations()}} cannot be 
> null, because the {{LocatedBlock}} constructor checks {{locs}} and assigns 
> {{EMPTY_LOCS}} if it is null.)
> The original exception is masked by the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13540) DFSStripedInputStream should not allocate new buffers during close / unbuffer

2018-05-09 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13540:
-
Attachment: HDFS-13540.02.patch

> DFSStripedInputStream should not allocate new buffers during close / unbuffer
> -
>
> Key: HDFS-13540
> URL: https://issues.apache.org/jira/browse/HDFS-13540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13540.01.patch, HDFS-13540.02.patch
>
>
> This was found in the same scenario where HDFS-13539 is caught.
> There are two OOMs that look interesting:
> {noformat}
> FSDataInputStream#close error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
> at java.io.FilterInputStream.close(FilterInputStream.java:181)
> {noformat}
> and 
> {noformat}
> org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:694)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
> at 
> org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
> at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
> {noformat}
> As the stack trace goes, {{resetCurStripeBuffer}} will get a buffer from the 
> buffer pool. We could save the cost of doing so if it's just a close or 
> unbuffer call.
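A minimal sketch of the proposed behavior (skip the buffer allocation when the caller is a close/unbuffer path) follows; the names are hypothetical stand-ins for the DFSStripedInputStream internals:

```java
import java.nio.ByteBuffer;

// Illustrative sketch with hypothetical names: make the stripe-buffer reset
// conditional, so close()/unbuffer() paths release block readers without
// pulling a fresh buffer from the pool.
public class StripeBufferSketch {
    ByteBuffer curStripeBuf;
    int allocations; // exposed only so the example can be checked

    private ByteBuffer allocate() {
        allocations++;
        return ByteBuffer.allocate(64 * 1024); // stand-in for the buffer pool
    }

    // The read path passes true; close()/unbuffer() pass false.
    void closeCurrentBlockReaders(boolean allocateNewBuffer) {
        if (curStripeBuf != null) {
            curStripeBuf.clear();
        } else if (allocateNewBuffer) {
            curStripeBuf = allocate();
        }
        // ... release the per-block readers here ...
    }

    public static void main(String[] args) {
        StripeBufferSketch s = new StripeBufferSketch();
        s.closeCurrentBlockReaders(false); // close path: no allocation
        System.out.println(s.allocations); // 0
        s.closeCurrentBlockReaders(true);  // read path: buffer allocated
        System.out.println(s.allocations); // 1
    }
}
```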



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13540) DFSStripedInputStream should not allocate new buffers during close / unbuffer

2018-05-09 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13540:
-
Description: 
This was found in the same scenario where HDFS-13539 is caught.

There are two OOMs that look interesting:
{noformat}
FSDataInputStream#close error:
OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct buffer 
memory
at java.nio.Bits.reserveMemory(Bits.java:694)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at 
org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
at java.io.FilterInputStream.close(FilterInputStream.java:181)
{noformat}
and 
{noformat}
org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct buffer 
memory
at java.nio.Bits.reserveMemory(Bits.java:694)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at 
org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
at 
org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
at 
org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
at 
org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
{noformat}

As the stack trace goes, {{resetCurStripeBuffer}} will get a buffer from the 
buffer pool. We could save the cost of doing so if the call is not for a read 
(e.g. close, unbuffer, etc.).

  was:
This was found in the same scenario where HDFS-13539 is caught.

There are two OOMs that look interesting:
{noformat}
FSDataInputStream#close error:
OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct buffer 
memory
at java.nio.Bits.reserveMemory(Bits.java:694)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at 
org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
at java.io.FilterInputStream.close(FilterInputStream.java:181)
{noformat}
and 
{noformat}
org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct buffer 
memory
at java.nio.Bits.reserveMemory(Bits.java:694)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at 
org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
at 
org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
at 
org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
at 
org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
{noformat}

As the stack trace goes, {{resetCurStripeBuffer}} will get buffer from the 
buffer pool. We could save the cost of doing so if it's just a close or 
unbuffer call.


> DFSStripedInputStream should not allocate new buffers during close / unbuffer
> -
>
> Key: HDFS-13540
> URL: https://issues.apache.org/jira/browse/HDFS-13540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13540.01.patch, HDFS-13540.02.patch
>
>
> This 

[jira] [Commented] (HDFS-13398) Hdfs recursive listing operation is very slow

2018-05-09 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469244#comment-16469244
 ] 

Mukul Kumar Singh commented on HDFS-13398:
--

Thanks for the updated patch [~ajaysachdev]. The Apache Hadoop trunk can be 
cloned with "git clone git://git.apache.org/hadoop.git"; please refer to 
https://wiki.apache.org/hadoop/GitAndHadoop for details.

Please rebase the patch on the latest Apache Hadoop trunk from that location.

> Hdfs recursive listing operation is very slow
> -
>
> Key: HDFS-13398
> URL: https://issues.apache.org/jira/browse/HDFS-13398
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
> Environment: HCFS file system where HDP 2.6.1 is connected to ECS 
> (Object Store).
>Reporter: Ajay Sachdev
>Assignee: Ajay Sachdev
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: HDFS-13398.001.patch, parallelfsPatch
>
>
> The hdfs dfs -ls -R command is sequential in nature and is very slow for an 
> HCFS system. We have seen around 6 minutes for a 40K directory/file structure.
> The proposal is to use a multithreading approach to speed up the recursive 
> list, du, and count operations.
> We have tried a ForkJoinPool implementation to improve performance for 
> recursive listing operation.
> [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli]
> commit id : 
> 82387c8cd76c2e2761bd7f651122f83d45ae8876
> Another implementation is to use Java Executor Service to improve performance 
> to run listing operation in multiple threads in parallel. This has 
> significantly reduced the time to 40 secs from 6 mins.
>  
>  
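The ForkJoinPool approach mentioned above can be sketched in plain Java against the local filesystem. Here java.nio.file stands in for the HDFS FileSystem API so the idea can be run anywhere; the class and method names are illustrative and not taken from the linked patch.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Each directory becomes a subtask; children are listed in parallel and
// the results are merged on join, instead of one sequential walk.
public class ParallelList {
    static class ListTask extends RecursiveTask<List<Path>> {
        private final Path dir;
        ListTask(Path dir) { this.dir = dir; }

        @Override
        protected List<Path> compute() {
            List<Path> result = new ArrayList<>();
            List<ListTask> subTasks = new ArrayList<>();
            try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
                for (Path p : ds) {
                    result.add(p);
                    if (Files.isDirectory(p)) {
                        ListTask t = new ListTask(p);
                        t.fork();              // list subdirectory in parallel
                        subTasks.add(t);
                    }
                }
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
            for (ListTask t : subTasks) {
                result.addAll(t.join());       // merge child listings
            }
            return result;
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("demo");
        Files.createDirectories(root.resolve("a/b"));
        Files.createFile(root.resolve("a/f1"));
        Files.createFile(root.resolve("a/b/f2"));
        List<Path> all = new ForkJoinPool().invoke(new ListTask(root));
        System.out.println(all.size());   // a, a/f1, a/b, a/b/f2 -> 4
    }
}
```

The same shape works with an ExecutorService, but a completion mechanism (e.g. counting outstanding tasks) is then needed because recursive submissions cannot simply be joined.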



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13315) Add a test for the issue reported in HDFS-11481 which is fixed by HDFS-10997.

2018-05-09 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-13315:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add a test for the issue reported in HDFS-11481 which is fixed by HDFS-10997.
> -
>
> Key: HDFS-13315
> URL: https://issues.apache.org/jira/browse/HDFS-13315
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Major
> Fix For: 3.1.1
>
> Attachments: HDFS-13315.001.patch, HDFS-13315.002.patch, 
> TEST-org.apache.hadoop.hdfs.TestEncryptionZones.xml
>
>
> HDFS-11481 reported that hdfs snapshotDiff /.reserved/raw/... fails on 
> snapshottable directories. It turns out that HDFS-10997 fixed the issue as a 
> byproduct. This jira is to add a test for the HDFS-11481 issue.






[jira] [Commented] (HDDS-18) Ozone: Ozone Shell should use RestClient and RpcClient

2018-05-09 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469226#comment-16469226
 ] 

Nanda kumar commented on HDDS-18:
-

Thanks [~ljain] for updating the patch. I ran the acceptance tests and they pass.
{noformat}
==
Acceptance
==
Acceptance.Ozone :: Smoke test to start cluster with docker-compose environ...
==
Daemons are running without error | PASS |
--
Check if datanode is connected to the scm | PASS |
--
Scale it up to 5 datanodes| PASS |
--
Test rest interface   | PASS |
--
Test ozone cli| PASS |
--
Check webui static resources  | PASS |
--
Start freon testing   | PASS |
--
Acceptance.Ozone :: Smoke test to start cluster with docker-compos... | PASS |
7 critical tests, 7 passed, 0 failed
7 tests total, 7 passed, 0 failed
==
Acceptance| PASS |
7 critical tests, 7 passed, 0 failed
7 tests total, 7 passed, 0 failed
==
{noformat}

Patch [^HDDS-18.003.patch] looks good to me, +1 (non-binding).

> Ozone: Ozone Shell should use RestClient and RpcClient
> --
>
> Key: HDDS-18
> URL: https://issues.apache.org/jira/browse/HDDS-18
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-18.001.patch, HDDS-18.002.patch, HDDS-18.003.patch, 
> HDFS-13431-HDFS-7240.001.patch, HDFS-13431-HDFS-7240.002.patch, 
> HDFS-13431-HDFS-7240.003.patch, HDFS-13431.001.patch, HDFS-13431.002.patch
>
>
> Currently the Ozone Shell uses OzoneRestClient. We should use both RestClient 
> and RpcClient instead of OzoneRestClient.






[jira] [Comment Edited] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-09 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469077#comment-16469077
 ] 

Xiao Chen edited comment on HDFS-13539 at 5/9/18 5:55 PM:
--

Failed tests do not look related.

For clarity:
- The synchronized changes only follow best practice; I don't see any real 
issue there that could cause the NPE.
- The existing checks guard {{pos}} and {{blockEnd}} pretty well, making sure 
{{blockSeekTo}} can return a valid {{currentLocatedBlock}}. This is presumably 
why the NPE is only seen on striped streams. But regardless, the possibility of 
an NPE will mask the original exception, so this patch proposes to improve that 
end.


was (Author: xiaochen):
Failed tests do not look related.
The synchronized changes only follow best practice; I don't see any real issue 
there that could cause the NPE.

> DFSInputStream NPE when reportCheckSumFailure
> -
>
> Key: HDFS-13539
> URL: https://issues.apache.org/jira/browse/HDFS-13539
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13539.01.patch
>
>
> We have seen the following exception with DFSStripedInputStream.
> {noformat}
> readDirect: FSDataInputStream#read error:
> NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
> {noformat}
> Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the 
> only possible null object. ({{currentLocatedBlock.getLocations()}} cannot be 
> null because the {{LocatedBlock}} constructor checks {{locs}} and assigns 
> {{EMPTY_LOCS}} if it is null.)
> The original exception is masked by the NPE.
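The guard discussed above can be modelled in a few lines of stand-alone Java. Names mirror DFSInputStream, but this is an illustrative sketch of the pattern, not the actual patch: reporting the corrupt block is best-effort, so a null {{currentLocatedBlock}} must not be dereferenced, otherwise the NPE replaces the original read exception.

```java
// Toy model of the null guard: skip best-effort corruption reporting when
// no block is cached, so the original exception propagates instead of an NPE.
public class ChecksumGuardDemo {
    static Object currentLocatedBlock = null;   // not yet populated

    static void reportCheckSumFailure(Object block) {
        // would notify the NameNode about the corrupt replica
        System.out.println("reported " + block);
    }

    static int read() {
        try {
            throw new RuntimeException("checksum error while reading");
        } catch (RuntimeException original) {
            // Guard: without it, original would be masked by an NPE here.
            if (currentLocatedBlock != null) {
                reportCheckSumFailure(currentLocatedBlock);
            }
            throw original;
        }
    }

    public static void main(String[] args) {
        try {
            read();
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());  // the original message survives
        }
    }
}
```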






[jira] [Updated] (HDFS-13539) DFSInputStream NPE when reportCheckSumFailure

2018-05-09 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13539:
-
Description: 
We have seen the following exception with DFSStripedInputStream.
{noformat}
readDirect: FSDataInputStream#read error:
NullPointerException: java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
{noformat}
Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the only 
possible null object. ({{currentLocatedBlock.getLocations()}} cannot be null 
because the {{LocatedBlock}} constructor checks {{locs}} and assigns 
{{EMPTY_LOCS}} if it is null.)

The original exception is masked by the NPE.

  was:
We have seen the following exception with DFSStripedInputStream.
{noformat}
readDirect: FSDataInputStream#read error:
NullPointerException: java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
{noformat}
Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the only 
possible null object.

Original exception is masked by the NPE.


> DFSInputStream NPE when reportCheckSumFailure
> -
>
> Key: HDFS-13539
> URL: https://issues.apache.org/jira/browse/HDFS-13539
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13539.01.patch
>
>
> We have seen the following exception with DFSStripedInputStream.
> {noformat}
> readDirect: FSDataInputStream#read error:
> NullPointerException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:402)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:147)
> {noformat}
> Line 402 is {{reportCheckSumFailure}}, and {{currentLocatedBlock}} is the 
> only possible null object. ({{currentLocatedBlock.getLocations()}} cannot be 
> null because the {{LocatedBlock}} constructor checks {{locs}} and assigns 
> {{EMPTY_LOCS}} if it is null.)
> The original exception is masked by the NPE.






[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-05-09 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469168#comment-16469168
 ] 

Rushabh S Shah commented on HDFS-13448:
---

[~belugabehr]: Could you please add some test cases for this enhancement?

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch
>
>
> According to the HDFS block placement rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  This comes into play when you have, for example, a Flume agent that is 
> loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default the DataNode 
> local to the Flume agent will always get the first block replica, and this 
> leads to uneven block placement, with the local node always filling up faster 
> than any other node in the cluster.
> Modifying this example, if the DataNode is removed from the host where the 
> Flume agent is running, or if {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only so far as the first block replica will now always 
> be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly and evenly over the entire cluster instead of hot-spotting the local 
> node or the local rack.
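The two behaviours can be contrasted in a toy model of the first-replica choice. {{NO_LOCAL_WRITE}} mirrors the existing CreateFlag; {{IGNORE_CLIENT_LOCALITY}} is an illustrative name for the flag proposed here, not an existing API, and the rack-string encoding is purely for the sketch.

```java
import java.util.EnumSet;
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;

// Toy model: nodes are "rack/datanode" strings; the flag decides whether
// the first replica is local, rack-local, or fully random.
public class FirstReplicaDemo {
    enum Flag { NO_LOCAL_WRITE, IGNORE_CLIENT_LOCALITY }

    static String chooseFirstReplica(List<String> cluster, String localNode,
                                     String localRack, EnumSet<Flag> flags,
                                     Random rnd) {
        if (flags.contains(Flag.IGNORE_CLIENT_LOCALITY)) {
            // proposed behaviour: any node in the cluster, uniformly at random
            return cluster.get(rnd.nextInt(cluster.size()));
        }
        if (flags.contains(Flag.NO_LOCAL_WRITE)) {
            // existing behaviour: skip the local node, still prefer local rack
            List<String> rackMates = cluster.stream()
                .filter(n -> n.startsWith(localRack) && !n.equals(localNode))
                .collect(Collectors.toList());
            return rackMates.get(rnd.nextInt(rackMates.size()));
        }
        return localNode;   // default: the writer's own DataNode
    }

    public static void main(String[] args) {
        List<String> cluster = List.of("r1/dn1", "r1/dn2", "r2/dn3", "r2/dn4");
        Random rnd = new Random(42);
        // default keeps the first replica on the writer's node
        System.out.println(chooseFirstReplica(cluster, "r1/dn1", "r1/",
            EnumSet.noneOf(Flag.class), rnd));                       // r1/dn1
        // NO_LOCAL_WRITE still lands on the local rack (only r1/dn2 qualifies)
        System.out.println(chooseFirstReplica(cluster, "r1/dn1", "r1/",
            EnumSet.of(Flag.NO_LOCAL_WRITE), rnd));                  // r1/dn2
    }
}
```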






[jira] [Commented] (HDFS-13541) NameNode Port based selective encryption

2018-05-09 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469155#comment-16469155
 ] 

Chen Liang commented on HDFS-13541:
---

[~benoyantony], would you be interested in taking a look?

cc. [~shv], [~zhz] and [~xkrogen].

> NameNode Port based selective encryption
> 
>
> Key: HDFS-13541
> URL: https://issues.apache.org/jira/browse/HDFS-13541
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: NameNode Port based selective encryption-v1.pdf
>
>
> Here at LinkedIn, one issue we face is that we need to enforce different 
> security requirements based on the locations of the client and the cluster. 
> Specifically, for clients from outside the data center, regulation requires 
> that all traffic be encrypted. But for clients within the same data center, 
> unencrypted connections are preferred to avoid the high encryption overhead. 
> HADOOP-10221 introduced a pluggable SASL resolver, based on which 
> HADOOP-10335 introduced WhitelistBasedResolver, which solves the same 
> problem. However, we found it difficult to fit into our environment for 
> several reasons. In this JIRA, on top of the pluggable SASL resolver, *we 
> propose a different approach: running RPC on two ports on the NameNode, where 
> the two ports enforce encrypted and unencrypted connections respectively, and 
> the subsequent DataNode access simply follows the same 
> encryption/unencryption behaviour*. Then, by blocking the unencrypted port on 
> the datacenter firewall, we can completely block unencrypted external access.
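The two-port scheme described above boils down to a per-port policy check, which can be modelled in stand-alone Java. The port numbers and QOP strings here are illustrative assumptions, not values from the attached design doc; "auth-conf" is the SASL level that implies encryption, "authentication" is plain authenticated traffic.

```java
import java.util.Map;

// Toy model: each NameNode RPC listener port maps to the SASL QOP it
// enforces, so the "secure" port admits only encrypted (auth-conf)
// connections while the internal port allows plain authentication.
public class PortQopDemo {
    static final Map<Integer, String> PORT_QOP = Map.of(
        8020, "authentication",   // internal, unencrypted traffic allowed
        8021, "auth-conf");       // external, encryption enforced

    static boolean accept(int listenerPort, String negotiatedQop) {
        // a connection is admitted only if it negotiated exactly the QOP
        // this port enforces
        return PORT_QOP.get(listenerPort).equals(negotiatedQop);
    }

    public static void main(String[] args) {
        System.out.println(accept(8021, "auth-conf"));        // true
        System.out.println(accept(8021, "authentication"));   // false
        System.out.println(accept(8020, "authentication"));   // true
    }
}
```

With this split, the firewall rule "drop external traffic to the unencrypted port" is all that is needed to guarantee external connections are encrypted.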






[jira] [Updated] (HDFS-13541) NameNode Port based selective encryption

2018-05-09 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13541:
--
Attachment: NameNode Port based selective encryption-v1.pdf

> NameNode Port based selective encryption
> 
>
> Key: HDFS-13541
> URL: https://issues.apache.org/jira/browse/HDFS-13541
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: NameNode Port based selective encryption-v1.pdf
>
>
> Here at LinkedIn, one issue we face is that we need to enforce different 
> security requirements based on the locations of the client and the cluster. 
> Specifically, for clients from outside the data center, regulation requires 
> that all traffic be encrypted. But for clients within the same data center, 
> unencrypted connections are preferred to avoid the high encryption overhead. 
> HADOOP-10221 introduced a pluggable SASL resolver, based on which 
> HADOOP-10335 introduced WhitelistBasedResolver, which solves the same 
> problem. However, we found it difficult to fit into our environment for 
> several reasons. In this JIRA, on top of the pluggable SASL resolver, *we 
> propose a different approach: running RPC on two ports on the NameNode, where 
> the two ports enforce encrypted and unencrypted connections respectively, and 
> the subsequent DataNode access simply follows the same 
> encryption/unencryption behaviour*. Then, by blocking the unencrypted port on 
> the datacenter firewall, we can completely block unencrypted external access.





