[jira] [Commented] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only

2019-12-11 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994281#comment-16994281
 ] 

Xiaoqiao He commented on HDFS-15051:


{quote}We can change to have just superuser access, if only the admin is 
supposed to handle the mount points and we don't expect any user to be directly 
operating on the Router mount points.{quote}
I agree with that one hundred percent.
On the other hand, if we open the mount point update privilege to all end 
users, there will be other issues. For instance, an end user could change a 
mount point from one namespace to another without transferring the data at the 
same time, in which case we could not keep the data consistency guarantee.
{quote}The user isn't supposed to know there is a Router, so why should they 
have rights on Router operations?
{quote}
+1. I believe we should keep mount points transparent to end users, so in my 
opinion it is reasonable to revoke this privilege. Any more information or 
consideration about the initial design would help us determine how to step 
forward.
cc [~linyiqun], would you mind offering some more suggestions?
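To make the proposal concrete, a minimal sketch (my own illustration, not the 
attached HDFS-15051.001.patch; the method name is hypothetical) of a 
superuser-only write check, reusing the isSuperUser()/getUser() helpers from 
the existing checker:
{code:java}
// Hedged sketch: allow mount table WRITE operations only for the superuser,
// so the client-supplied owner/group/mode can no longer influence the check.
public void checkSuperuserWritePermission() throws AccessControlException {
  if (isSuperUser()) {
    return;
  }
  throw new AccessControlException(
      "Permission denied: user " + getUser()
          + " is not a superuser and cannot modify the mount table.");
}
{code}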

> RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
> -
>
> Key: HDFS-15051
> URL: https://issues.apache.org/jira/browse/HDFS-15051
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15051.001.patch
>
>
> The current permission checker in #MountTableStoreImpl is not very strict. In 
> some cases, any user can add/update/remove a MountTableEntry without the 
> expected permission checking.
> The following code segment tries to check permissions when operating on a 
> MountTableEntry; however, the mountTable object comes from the 
> Client/RouterAdmin ({{MountTable mountTable = request.getEntry();}}), so a 
> user can pass any mode and thereby bypass the permission checker.
> {code:java}
>   public void checkPermission(MountTable mountTable, FsAction access)
>   throws AccessControlException {
> if (isSuperUser()) {
>   return;
> }
> FsPermission mode = mountTable.getMode();
> if (getUser().equals(mountTable.getOwnerName())
> && mode.getUserAction().implies(access)) {
>   return;
> }
> if (isMemberOfGroup(mountTable.getGroupName())
> && mode.getGroupAction().implies(access)) {
>   return;
> }
> if (!getUser().equals(mountTable.getOwnerName())
> && !isMemberOfGroup(mountTable.getGroupName())
> && mode.getOtherAction().implies(access)) {
>   return;
> }
> throw new AccessControlException(
> "Permission denied while accessing mount table "
> + mountTable.getSourcePath()
> + ": user " + getUser() + " does not have " + access.toString()
> + " permissions.");
>   }
> {code}
> I propose revoking the WRITE MountTableEntry privilege to the super user only.
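As a concrete illustration of the bypass (using the same command shown later 
in this thread), a non-privileged user can update an entry owned by 
hdfs:hadoop simply by supplying owner/group/mode values that satisfy the check:
{noformat}
bin/hdfs dfsrouteradmin -update /user nameservice /user2 -owner hdfs -group hadoop -mode 700
{noformat}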






[jira] [Assigned] (HDFS-15052) WebHDFS getTrashRoot leads to OOM due to FileSystem object creation

2019-12-11 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki reassigned HDFS-15052:
---

Assignee: Masatake Iwasaki

> WebHDFS getTrashRoot leads to OOM due to FileSystem object creation
> ---
>
> Key: HDFS-15052
> URL: https://issues.apache.org/jira/browse/HDFS-15052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
>Priority: Major
>
> Quoting [~daryn] in HDFS-10756:
> {quote}Surprised nobody has discovered this will lead to an inevitable OOM in 
> the NN. The NN should not be creating filesystems to itself, and must never 
> create filesystems in a remote user's context or the cache will explode.
> {quote}
> I guess the problem lies inside NamenodeWebHdfsMethods#getTrashRoot:
> {code:java}
> private static String getTrashRoot(String fullPath,
> Configuration conf) throws IOException {
>   FileSystem fs = FileSystem.get(conf != null ? conf : new 
> Configuration());
>   return fs.getTrashRoot(
>   new org.apache.hadoop.fs.Path(fullPath)).toUri().getPath();
> }
> {code}
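A hedged note on the mechanism: FileSystem.get() caches instances keyed by 
(scheme, authority, UGI), so when the NN evaluates the call above inside each 
remote caller's context, every distinct user creates a new cached FileSystem 
that is never closed. A minimal sketch of that pattern (user name 
hypothetical):
{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

// Each distinct UGI yields a distinct cache key, hence a new FileSystem
// instance that stays referenced by the cache -- unbounded growth over
// many users, and eventually an OOM.
UserGroupInformation ugi = UserGroupInformation.createRemoteUser("alice");
FileSystem fs = ugi.doAs(
    (PrivilegedExceptionAction<FileSystem>) () ->
        FileSystem.get(new Configuration()));
{code}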






[jira] [Commented] (HDFS-15046) Backport HDFS-7060 to branch-2.10

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994274#comment-16994274
 ] 

Hadoop QA commented on HDFS-15046:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} HDFS-15046 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-15046 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988644/HDFS-15046.branch-2.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28514/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Backport HDFS-7060 to branch-2.10
> -
>
> Key: HDFS-15046
> URL: https://issues.apache.org/jira/browse/HDFS-15046
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-15046.branch-2.001.patch, 
> HDFS-15046.branch-2.9.001.patch, HDFS-15046.branch-2.9.002.patch
>
>
> Not sure why it didn't get backported in 2.x before, but looks like a good 
> improvement overall.






[jira] [Updated] (HDFS-15046) Backport HDFS-7060 to branch-2.10

2019-12-11 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-15046:
---
Attachment: HDFS-15046.branch-2.001.patch

> Backport HDFS-7060 to branch-2.10
> -
>
> Key: HDFS-15046
> URL: https://issues.apache.org/jira/browse/HDFS-15046
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-15046.branch-2.001.patch, 
> HDFS-15046.branch-2.9.001.patch, HDFS-15046.branch-2.9.002.patch
>
>
> Not sure why it didn't get backported in 2.x before, but looks like a good 
> improvement overall.






[jira] [Comment Edited] (HDFS-14978) In-place Erasure Coding Conversion

2019-12-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994240#comment-16994240
 ] 

Ayush Saxena edited comment on HDFS-14978 at 12/12/19 5:56 AM:
---

Thanx [~weichiu] for the design, a couple of doubts:
* Does swapBlockList() take care of storage quota validations?
* How are storage policies handled here? Do you retain them? Some policies, 
like ONE_SSD, are not supported for EC.
* What would the response be if, say, you want to switch to 6-3 but can fetch 
only 7 blocks and are unable to write two of the parities? Do we allow moving 
further, despite the chance of making the data more vulnerable, or do we fail?
* Does the overhead of an extra tmp file still stay?
* I need to check this, but: if a client is reading a file when swapBlocks() 
happens and the block locations change, it will refetch the locations from the 
NameNode, which would now be EC blocks. The client would be on DFSInputStream, 
not DFSStripedInputStream. I might have messed up, but is there a cover for 
this?


was (Author: ayushtkn):
Thanx [~weichiu] for the design, a couple of doubts:
* Does swapBlockList() take care of storage quota validations?
* How are storage policies handled here? Do you retain them? Some policies, 
like ONE_SSD, are not supported for EC.
* What would the response be if, say, you want to switch to 6-3 but can fetch 
only 7 blocks and are unable to write two of the parities? Do we allow moving 
further, despite the chance of making the data more vulnerable, or do we fail?
* Does the overhead of an extra tmp file still stay?


> In-place Erasure Coding Conversion
> --
>
> Key: HDFS-14978
> URL: https://issues.apache.org/jira/browse/HDFS-14978
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: In-place Erasure Coding Conversion.pdf
>
>
> HDFS Erasure Coding is a new feature added in Apache Hadoop 3.0. It uses 
> encoding algorithms to reduce disk space usage while retaining redundancy 
> necessary for data recovery. It was a huge amount of work but it is just 
> getting adopted after almost 2 years.
> One usability problem that’s blocking users from adopting HDFS Erasure Coding 
> is that existing replicated files have to be copied to an EC-enabled 
> directory explicitly. Renaming a file/directory to an EC-enabled directory 
> does not automatically convert the blocks. Therefore users typically perform 
> the following steps to erasure-code existing files:
> {noformat}
> Create $tmp directory, set EC policy at it
> Distcp $src to $tmp
> Delete $src (rm -rf $src)
> mv $tmp $src
> {noformat}
> There are several reasons why this is not popular:
> * Complex. The process involves several steps: distcp data to a temporary 
> destination; delete source file; move destination to the source path.
> * Availability: there is a short period where nothing exists at the source 
> path, and jobs may fail unexpectedly.
> * Overhead. During the copy phase, there is a point in time where both the 
> source and the destination files exist at the same time, exhausting disk 
> space.
> * Not snapshot-friendly. If a snapshot is taken prior to performing the 
> conversion, the source (replicated) files will be preserved in the cluster 
> too. Therefore, the conversion actually increases storage space usage.
> * Not management-friendly. This approach changes the file inode number, 
> modification time and access time. Erasure-coded files are supposed to store 
> cold data, but this conversion makes data “hot” again.
> * Bulky. It’s either all or nothing. The directory may be partially erasure 
> coded, but this approach simply erasure-codes everything again.
> To ease data management, we should offer a utility tool to convert replicated 
> files to erasure coded files in-place.
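For concreteness, a hedged rendition of the workaround quoted above, with 
illustrative paths and the built-in RS-6-3-1024k policy (all names are 
examples only):
{noformat}
hdfs dfs -mkdir /data/.ec_tmp
hdfs ec -setPolicy -path /data/.ec_tmp -policy RS-6-3-1024k
hadoop distcp /data/warehouse /data/.ec_tmp/warehouse
hdfs dfs -rm -r -skipTrash /data/warehouse
hdfs dfs -mv /data/.ec_tmp/warehouse /data/warehouse
{noformat}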






[jira] [Commented] (HDFS-15012) NN fails to parse Edit logs after applying HDFS-13101

2019-12-11 Thread Shashikant Banerjee (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994244#comment-16994244
 ] 

Shashikant Banerjee commented on HDFS-15012:


Thanks [~szetszwo]. Patch v1 addresses the checkstyle issues.

> NN fails to parse Edit logs after applying HDFS-13101
> -
>
> Key: HDFS-15012
> URL: https://issues.apache.org/jira/browse/HDFS-15012
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nn
>Reporter: Eric Lin
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-15012.000.patch, HDFS-15012.001.patch
>
>
> After applying HDFS-13101 and deleting and creating a large number of 
> snapshots, the SNN exited with the error below:
>   
> {code:sh}
> 2019-11-18 08:28:06,528 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception 
> on operation DeleteSnapshotOp [snapshotRoot=/path/to/hdfs/file, 
> snapshotName=distcp-3479-31-old, 
> RpcClientId=b16a6cb5-bdbb-45ae-9f9a-f7dc57931f37, Rpc
> CallId=1]
> java.lang.AssertionError: Element already exists: 
> element=partition_isactive=true, DELETED=[partition_isactive=true]
> at org.apache.hadoop.hdfs.util.Diff.insert(Diff.java:193)
> at org.apache.hadoop.hdfs.util.Diff.delete(Diff.java:239)
> at org.apache.hadoop.hdfs.util.Diff.combinePosterior(Diff.java:462)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$2.initChildren(DirectoryWithSnapshotFeature.java:240)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$2.iterator(DirectoryWithSnapshotFeature.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtreeRecursively(INodeDirectory.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:753)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:790)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeReference.cleanSubtree(INodeReference.java:332)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeReference$WithName.cleanSubtree(INodeReference.java:583)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtreeRecursively(INodeDirectory.java:760)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:753)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:790)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:235)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:259)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:301)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:688)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:232)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:141)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:903)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:756)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:324)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1144)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:796)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
> {code}
> We confirmed that the fsimage and edit files were NOT corrupted, as reverting 
> HDFS-13101 fixed the issue. So the logic introduced in HDFS-13101 is broken 
> and fails to parse the edit log files.
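For context, a hedged sketch of the snapshot churn described above (the path 
and snapshot name are taken from the log; the exact internal trigger lies in 
the HDFS-13101 diff logic):
{noformat}
hdfs dfsadmin -allowSnapshot /path/to/hdfs/file
hdfs dfs -createSnapshot /path/to/hdfs/file distcp-3479-31-old
hdfs dfs -deleteSnapshot /path/to/hdfs/file distcp-3479-31-old
# ... repeated many times, after which the SNN fails while replaying the edits
{noformat}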




[jira] [Updated] (HDFS-15012) NN fails to parse Edit logs after applying HDFS-13101

2019-12-11 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-15012:
---
Attachment: HDFS-15012.001.patch

> NN fails to parse Edit logs after applying HDFS-13101
> -
>
> Key: HDFS-15012
> URL: https://issues.apache.org/jira/browse/HDFS-15012
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nn
>Reporter: Eric Lin
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-15012.000.patch, HDFS-15012.001.patch
>
>
> After applying HDFS-13101 and deleting and creating a large number of 
> snapshots, the SNN exited with the error below:
>   
> {code:sh}
> 2019-11-18 08:28:06,528 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception 
> on operation DeleteSnapshotOp [snapshotRoot=/path/to/hdfs/file, 
> snapshotName=distcp-3479-31-old, 
> RpcClientId=b16a6cb5-bdbb-45ae-9f9a-f7dc57931f37, Rpc
> CallId=1]
> java.lang.AssertionError: Element already exists: 
> element=partition_isactive=true, DELETED=[partition_isactive=true]
> at org.apache.hadoop.hdfs.util.Diff.insert(Diff.java:193)
> at org.apache.hadoop.hdfs.util.Diff.delete(Diff.java:239)
> at org.apache.hadoop.hdfs.util.Diff.combinePosterior(Diff.java:462)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$2.initChildren(DirectoryWithSnapshotFeature.java:240)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$2.iterator(DirectoryWithSnapshotFeature.java:250)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtreeRecursively(INodeDirectory.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:753)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:790)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeReference.cleanSubtree(INodeReference.java:332)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeReference$WithName.cleanSubtree(INodeReference.java:583)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtreeRecursively(INodeDirectory.java:760)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:753)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:790)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:235)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:259)
> at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:301)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:688)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:232)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:141)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:903)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:756)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:324)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1144)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:796)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
> {code}
> We confirmed that the fsimage and edit files were NOT corrupted, as reverting 
> HDFS-13101 fixed the issue. So the logic introduced in HDFS-13101 is broken 
> and fails to parse the edit log files.




[jira] [Commented] (HDFS-14978) In-place Erasure Coding Conversion

2019-12-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994240#comment-16994240
 ] 

Ayush Saxena commented on HDFS-14978:
-

Thanx [~weichiu] for the design, a couple of doubts:
* Does swapBlockList() take care of storage quota validations?
* How are storage policies handled here? Do you retain them? Some policies, 
like ONE_SSD, are not supported for EC.
* What would the response be if, say, you want to switch to 6-3 but can fetch 
only 7 blocks and are unable to write two of the parities? Do we allow moving 
further, despite the chance of making the data more vulnerable, or do we fail?
* Does the overhead of an extra tmp file still stay?


> In-place Erasure Coding Conversion
> --
>
> Key: HDFS-14978
> URL: https://issues.apache.org/jira/browse/HDFS-14978
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: In-place Erasure Coding Conversion.pdf
>
>
> HDFS Erasure Coding is a new feature added in Apache Hadoop 3.0. It uses 
> encoding algorithms to reduce disk space usage while retaining redundancy 
> necessary for data recovery. It was a huge amount of work but it is just 
> getting adopted after almost 2 years.
> One usability problem that’s blocking users from adopting HDFS Erasure Coding 
> is that existing replicated files have to be copied to an EC-enabled 
> directory explicitly. Renaming a file/directory to an EC-enabled directory 
> does not automatically convert the blocks. Therefore users typically perform 
> the following steps to erasure-code existing files:
> {noformat}
> Create $tmp directory, set EC policy at it
> Distcp $src to $tmp
> Delete $src (rm -rf $src)
> mv $tmp $src
> {noformat}
> There are several reasons why this is not popular:
> * Complex. The process involves several steps: distcp data to a temporary 
> destination; delete source file; move destination to the source path.
> * Availability: there is a short period where nothing exists at the source 
> path, and jobs may fail unexpectedly.
> * Overhead. During the copy phase, there is a point in time where both the 
> source and the destination files exist at the same time, exhausting disk 
> space.
> * Not snapshot-friendly. If a snapshot is taken prior to performing the 
> conversion, the source (replicated) files will be preserved in the cluster 
> too. Therefore, the conversion actually increases storage space usage.
> * Not management-friendly. This approach changes the file inode number, 
> modification time and access time. Erasure-coded files are supposed to store 
> cold data, but this conversion makes data “hot” again.
> * Bulky. It’s either all or nothing. The directory may be partially erasure 
> coded, but this approach simply erasure-codes everything again.
> To ease data management, we should offer a utility tool to convert replicated 
> files to erasure coded files in-place.






[jira] [Commented] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only

2019-12-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994230#comment-16994230
 ] 

Ayush Saxena commented on HDFS-15051:
-

bq. Importantly, on top of this, the super user should have permissions to 
override all this (I'm not sure if this is true right now).

I think it is there:

{code:java}
  public void checkPermission(MountTable mountTable, FsAction access)
  throws AccessControlException {
if (isSuperUser()) {
  return;
}
{code}

We can change to have just superuser access, if only the admin is supposed to 
handle the mount points and we don't expect any user to be directly operating 
on the Router mount points. I am not sure about the initial idea, but I suppose 
users aren't supposed to know they are even talking to a Router: RBF provides 
a global namespace to the end users and appears as if the user is talking 
directly to the NameNode, without any Router in between. If that is to be 
believed, then the user isn't supposed to know there is a Router, so why should 
they have rights on Router operations? By this logic we can make this change. 
(I am not sure how the compat guidelines see it.)

[~elgoiri] may give a better idea about the initial design...


> RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
> -
>
> Key: HDFS-15051
> URL: https://issues.apache.org/jira/browse/HDFS-15051
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15051.001.patch
>
>
> The current permission checker in #MountTableStoreImpl is not very strict. In 
> some cases, any user can add/update/remove a MountTableEntry without the 
> expected permission checking.
> The following code segment tries to check permissions when operating on a 
> MountTableEntry; however, the mountTable object comes from the 
> Client/RouterAdmin ({{MountTable mountTable = request.getEntry();}}), so a 
> user can pass any mode and thereby bypass the permission checker.
> {code:java}
>   public void checkPermission(MountTable mountTable, FsAction access)
>   throws AccessControlException {
> if (isSuperUser()) {
>   return;
> }
> FsPermission mode = mountTable.getMode();
> if (getUser().equals(mountTable.getOwnerName())
> && mode.getUserAction().implies(access)) {
>   return;
> }
> if (isMemberOfGroup(mountTable.getGroupName())
> && mode.getGroupAction().implies(access)) {
>   return;
> }
> if (!getUser().equals(mountTable.getOwnerName())
> && !isMemberOfGroup(mountTable.getGroupName())
> && mode.getOtherAction().implies(access)) {
>   return;
> }
> throw new AccessControlException(
> "Permission denied while accessing mount table "
> + mountTable.getSourcePath()
> + ": user " + getUser() + " does not have " + access.toString()
> + " permissions.");
>   }
> {code}
> I propose revoking the WRITE MountTableEntry privilege to the super user only.






[jira] [Commented] (HDFS-15027) Correct target DN's log while balancing.

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994212#comment-16994212
 ] 

Hadoop QA commented on HDFS-15027:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
31s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 |
| JIRA Issue | HDFS-15027 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988628/HDFS-15027.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 22fa40b3b210 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 93bb368 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28512/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28512/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only

2019-12-11 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994154#comment-16994154
 ] 

Xiaoqiao He commented on HDFS-15051:


Thanks [~elgoiri] for offering the design consideration. However, it seems that 
goal is not met in the current implementation, and there are still security 
vulnerabilities in the MountTableEntry operation privilege. Consider the 
following case: given a mount entry with the attributes below, it seems that 
any user, such as `anonymous`, could update it with a command like `bin/hdfs 
dfsrouteradmin -update /user nameservice /user2 -owner hdfs -group hadoop -mode 
700`, since the permission checker of #RouterAdminServer relies only on the 
information (user, group, mode) supplied by the client. Please correct me if I 
missed something.
{code:java}
/user   nameservice   /user   hdfs   hadoop   rwxr-xr-x
{code}
So I propose we enhance the privilege control to avoid malicious updates, or 
even revoke the update MountTableEntry privilege to the super user only.
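A hedged sketch of one possible server-side hardening (not the attached patch; 
lookupStoredEntry is a hypothetical helper): run the check against the entry 
already stored in the State Store rather than the client-supplied one:
{code:java}
// Hypothetical sketch: fetch the stored entry for the source path and check
// permissions on it, so a caller cannot smuggle in a permissive
// owner/group/mode via request.getEntry().
MountTable stored = lookupStoredEntry(request.getEntry().getSourcePath());
if (stored != null) {
  checkPermission(stored, FsAction.WRITE);  // stored attributes, not client's
}
{code}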

> RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
> -
>
> Key: HDFS-15051
> URL: https://issues.apache.org/jira/browse/HDFS-15051
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15051.001.patch
>
>
> The current permission checker in #MountTableStoreImpl is not very strict. In 
> some cases, any user can add/update/remove a MountTableEntry without the 
> expected permission checking.
> The following code segment tries to check permissions when operating on a 
> MountTableEntry; however, the mountTable object comes from the 
> Client/RouterAdmin ({{MountTable mountTable = request.getEntry();}}), so a 
> user can pass any mode and thereby bypass the permission checker.
> {code:java}
>   public void checkPermission(MountTable mountTable, FsAction access)
>   throws AccessControlException {
> if (isSuperUser()) {
>   return;
> }
> FsPermission mode = mountTable.getMode();
> if (getUser().equals(mountTable.getOwnerName())
> && mode.getUserAction().implies(access)) {
>   return;
> }
> if (isMemberOfGroup(mountTable.getGroupName())
> && mode.getGroupAction().implies(access)) {
>   return;
> }
> if (!getUser().equals(mountTable.getOwnerName())
> && !isMemberOfGroup(mountTable.getGroupName())
> && mode.getOtherAction().implies(access)) {
>   return;
> }
> throw new AccessControlException(
> "Permission denied while accessing mount table "
> + mountTable.getSourcePath()
> + ": user " + getUser() + " does not have " + access.toString()
> + " permissions.");
>   }
> {code}
> I propose revoking the WRITE MountTableEntry privilege to the super user only.






[jira] [Commented] (HDFS-14983) RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option

2019-12-11 Thread Xieming Li (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994089#comment-16994089
 ] 

Xieming Li commented on HDFS-14983:
---

[~tasanuma][~inigoiri]
Thank you for your review.

> RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
> ---
>
> Key: HDFS-14983
> URL: https://issues.apache.org/jira/browse/HDFS-14983
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14983.002.patch, HDFS-14983.003.patch, 
> HDFS-14983.004.patch, HDFS-14983.005.patch, HDFS-14983.006.patch, 
> HDFS-14983.draft.001.patch
>
>
> The NameNode can update its proxyuser config via 
> -refreshSuperUserGroupsConfiguration without restarting, but the DFSRouter 
> cannot. It would be better for the DFSRouter to have the same functionality, 
> to be compatible with the NameNode.
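For reference, a hedged illustration of the two commands (the dfsadmin form 
already exists on the NameNode side; the dfsrouteradmin form is what this JIRA 
adds):
{noformat}
# Existing NameNode-side refresh:
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
# Router-side equivalent added by HDFS-14983:
hdfs dfsrouteradmin -refreshSuperUserGroupsConfiguration
{noformat}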






[jira] [Comment Edited] (HDFS-15027) Correct target DN's log while balancing.

2019-12-11 Thread Xudong Cao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994082#comment-16994082
 ] 

Xudong Cao edited comment on HDFS-15027 at 12/12/19 2:14 AM:
-

[~weichiu] Perhaps we should keep the keyword "Moved" to reflect the meaning 
of moving a block during balancing, like:

*Moved BLOCK complete, copied from PROXY DN, initiated by* *BALANCER*


was (Author: xudongcao):
[~weichiu] Perhaps we should keep the keyword "Moved" to reflect the meaning 
of moving a block during balancing, like:
{code:java}
2019-12-12 10:06:34,791 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Moved BP-1360308441-192.168.202.11-1576116241828:blk_1073741872_1048 complete, 
copied from /192.168.202.11:9866, initiated by /192.168.202.13:53536, 
delHint=c70406f8-a815-4f6f-bdf0-fd3661bd6920{code}

> Correct target DN's log while balancing.
> 
>
> Key: HDFS-15027
> URL: https://issues.apache.org/jira/browse/HDFS-15027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.2.1
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-15027.000.patch, HDFS-15027.001.patch
>
>
> During HDFS balancing, after the target DN copies a block from the proxy DN, 
> it prints a log line following the pattern below:
> *Moved BLOCK from BALANCER*
> This is wrong and misleading; we could improve the pattern to something like:
> *Moved BLOCK complete, copied from PROXY DN, initiated by* *BALANCER*
>  
> An example log of target DN during balancing:
> 1. Wrong log printing before jira:
> {code:java}
> 2019-12-04 09:33:19,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Moved BP-1426342230-192.168.202.11-1575277482603:blk_1073741889_1065 from 
> /192.168.202.13:56322, delHint=54a14a41-0d7c-4487-b4f0-ce2848f86b48{code}
> 2. Correct log printing after jira:
> {code:java}
> 2019-12-12 10:06:34,791 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Moved BP-1360308441-192.168.202.11-1576116241828:blk_1073741872_1048 
> complete, copied from /192.168.202.11:9866, initiated by 
> /192.168.202.13:53536, delHint=c70406f8-a815-4f6f-bdf0-fd3661bd6920{code}






[jira] [Updated] (HDFS-15027) Correct target DN's log while balancing.

2019-12-11 Thread Xudong Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-15027:
--
Attachment: HDFS-15027.001.patch

> Correct target DN's log while balancing.
> 
>
> Key: HDFS-15027
> URL: https://issues.apache.org/jira/browse/HDFS-15027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.2.1
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-15027.000.patch, HDFS-15027.001.patch
>
>
> During HDFS balancing, after the target DN copies a block from the proxy DN, 
> it prints a log line following the pattern below:
> *Moved BLOCK from BALANCER*
> This is wrong and misleading; we could improve the pattern to something like:
> *Moved BLOCK complete, copied from PROXY DN, initiated by* *BALANCER*
>  
> An example log of target DN during balancing:
> 1. Wrong log printing before jira:
> {code:java}
> 2019-12-04 09:33:19,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Moved BP-1426342230-192.168.202.11-1575277482603:blk_1073741889_1065 from 
> /192.168.202.13:56322, delHint=54a14a41-0d7c-4487-b4f0-ce2848f86b48{code}
> 2. Correct log printing after jira:
> {code:java}
> 2019-12-12 10:06:34,791 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Moved BP-1360308441-192.168.202.11-1576116241828:blk_1073741872_1048 
> complete, copied from /192.168.202.11:9866, initiated by 
> /192.168.202.13:53536, delHint=c70406f8-a815-4f6f-bdf0-fd3661bd6920{code}






[jira] [Commented] (HDFS-14983) RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option

2019-12-11 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994084#comment-16994084
 ] 

Hudson commented on HDFS-14983:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17754 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17754/])
HDFS-14983. RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration 
(tasanuma: rev 93bb368094e48e752c0732d979fbcd24e432bfc1)
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/RefreshSuperUserGroupsConfigurationRequest.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RefreshSuperUserGroupsConfigurationRequestPBImpl.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRefreshSuperUserGroupsConfiguration.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClient.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/RouterProtocol.proto
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RefreshSuperUserGroupsConfigurationResponsePBImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/RefreshSuperUserGroupsConfigurationResponse.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/RouterGenericManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolTranslatorPB.java


> RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
> ---
>
> Key: HDFS-14983
> URL: https://issues.apache.org/jira/browse/HDFS-14983
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14983.002.patch, HDFS-14983.003.patch, 
> HDFS-14983.004.patch, HDFS-14983.005.patch, HDFS-14983.006.patch, 
> HDFS-14983.draft.001.patch
>
>
> The NameNode can update its proxyuser config via 
> -refreshSuperUserGroupsConfiguration without restarting, but the DFSRouter 
> cannot. It would be better for the DFSRouter to have the same functionality, 
> to be compatible with the NameNode.






[jira] [Comment Edited] (HDFS-15027) Correct target DN's log while balancing.

2019-12-11 Thread Xudong Cao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994082#comment-16994082
 ] 

Xudong Cao edited comment on HDFS-15027 at 12/12/19 2:11 AM:
-

[~weichiu] Perhaps we should keep the keyword "Moved" to reflect the meaning 
of moving a block during balancing, like:
{code:java}
2019-12-12 10:06:34,791 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Moved BP-1360308441-192.168.202.11-1576116241828:blk_1073741872_1048 complete, 
copied from /192.168.202.11:9866, initiated by /192.168.202.13:53536, 
delHint=c70406f8-a815-4f6f-bdf0-fd3661bd6920{code}


was (Author: xudongcao):
[~weichiu] Perhaps we should keep the keyword "Moved" to reflect the meaning 
of moving a block during balancing, like:
{code:java}
2019-12-12 10:06:34,791 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Moved BP-1360308441-192.168.202.11-1576116241828:blk_1073741872_1048 complete, 
copied from /192.168.202.11:9866, initiated by 
/192.168.202.13:53536,delHint=c70406f8-a815-4f6f-bdf0-fd3661bd6920{code}

> Correct target DN's log while balancing.
> 
>
> Key: HDFS-15027
> URL: https://issues.apache.org/jira/browse/HDFS-15027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.2.1
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-15027.000.patch
>
>
> During HDFS balancing, after the target DN copies a block from the proxy DN, 
> it prints a log line following the pattern below:
> *Moved BLOCK from BALANCER*
> This is wrong and misleading; we could improve the pattern to something like:
> *Moved BLOCK complete, copied from PROXY DN, initiated by* *BALANCER*
>  
> An example log of target DN during balancing:
> 1. Wrong log printing before jira:
> {code:java}
> 2019-12-04 09:33:19,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Moved BP-1426342230-192.168.202.11-1575277482603:blk_1073741889_1065 from 
> /192.168.202.13:56322, delHint=54a14a41-0d7c-4487-b4f0-ce2848f86b48{code}
> 2. Correct log printing after jira:
> {code:java}
> 2019-12-12 10:06:34,791 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Moved BP-1360308441-192.168.202.11-1576116241828:blk_1073741872_1048 
> complete, copied from /192.168.202.11:9866, initiated by 
> /192.168.202.13:53536, delHint=c70406f8-a815-4f6f-bdf0-fd3661bd6920{code}






[jira] [Updated] (HDFS-15027) Correct target DN's log while balancing.

2019-12-11 Thread Xudong Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-15027:
--
Description: 
During HDFS balancing, after the target DN copies a block from the proxy DN, 
it prints a log line following the pattern below:

*Moved BLOCK from BALANCER*

This is wrong and misleading; we could improve the pattern to something like:

*Moved BLOCK complete, copied from PROXY DN, initiated by* *BALANCER*

 

An example log of target DN during balancing:

1. Wrong log printing before jira:
{code:java}
2019-12-04 09:33:19,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Moved BP-1426342230-192.168.202.11-1575277482603:blk_1073741889_1065 from 
/192.168.202.13:56322, delHint=54a14a41-0d7c-4487-b4f0-ce2848f86b48{code}
2. Correct log printing after jira:
{code:java}
2019-12-12 10:06:34,791 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Moved BP-1360308441-192.168.202.11-1576116241828:blk_1073741872_1048 complete, 
copied from /192.168.202.11:9866, initiated by /192.168.202.13:53536, 
delHint=c70406f8-a815-4f6f-bdf0-fd3661bd6920{code}

  was:
During HDFS balancing, after the target DN copies a block from the proxy DN, 
it prints a log line following the pattern below:

*Moved BLOCK from BALANCER*

This is wrong and misleading; we could improve the pattern to something like:

*Moved BLOCK complete, copied from PROXY DN, initiated by* *BALANCER*

 

An example log of target DN during balancing:

1. Wrong log printing before jira:
{code:java}
2019-12-04 09:33:19,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Moved BP-1426342230-192.168.202.11-1575277482603:blk_1073741889_1065 from 
/192.168.202.13:56322, delHint=54a14a41-0d7c-4487-b4f0-ce2848f86b48{code}
2. Correct log printing after jira:
{code:java}
2019-12-04 10:32:06,707 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Copied BP-428834875-192.168.202.11-1575425340126:blk_1073741918_1094 from 
/192.168.202.11:9866, initiated by /192.168.202.13:44502, 
delHint=84a0626a-5fa4-4c66-a776-074f537d4235{code}


> Correct target DN's log while balancing.
> 
>
> Key: HDFS-15027
> URL: https://issues.apache.org/jira/browse/HDFS-15027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.2.1
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-15027.000.patch
>
>
> During HDFS balancing, after the target DN copies a block from the proxy DN, 
> it prints a log line following the pattern below:
> *Moved BLOCK from BALANCER*
> This is wrong and misleading; we could improve the pattern to something like:
> *Moved BLOCK complete, copied from PROXY DN, initiated by* *BALANCER*
>  
> An example log of target DN during balancing:
> 1. Wrong log printing before jira:
> {code:java}
> 2019-12-04 09:33:19,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Moved BP-1426342230-192.168.202.11-1575277482603:blk_1073741889_1065 from 
> /192.168.202.13:56322, delHint=54a14a41-0d7c-4487-b4f0-ce2848f86b48{code}
> 2. Correct log printing after jira:
> {code:java}
> 2019-12-12 10:06:34,791 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Moved BP-1360308441-192.168.202.11-1576116241828:blk_1073741872_1048 
> complete, copied from /192.168.202.11:9866, initiated by 
> /192.168.202.13:53536, delHint=c70406f8-a815-4f6f-bdf0-fd3661bd6920{code}






[jira] [Updated] (HDFS-15027) Correct target DN's log while balancing.

2019-12-11 Thread Xudong Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-15027:
--
Description: 
During HDFS balancing, after the target DN copies a block from the proxy DN, 
it prints a log line following the pattern below:

*Moved BLOCK from BALANCER*

This is wrong and misleading; we could improve the pattern to something like:

*Moved BLOCK complete, copied from PROXY DN, initiated by* *BALANCER*

 

An example log of target DN during balancing:

1. Wrong log printing before jira:
{code:java}
2019-12-04 09:33:19,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Moved BP-1426342230-192.168.202.11-1575277482603:blk_1073741889_1065 from 
/192.168.202.13:56322, delHint=54a14a41-0d7c-4487-b4f0-ce2848f86b48{code}
2. Correct log printing after jira:
{code:java}
2019-12-04 10:32:06,707 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Copied BP-428834875-192.168.202.11-1575425340126:blk_1073741918_1094 from 
/192.168.202.11:9866, initiated by /192.168.202.13:44502, 
delHint=84a0626a-5fa4-4c66-a776-074f537d4235{code}

  was:
During HDFS balancing, after the target DN copies a block from the proxy DN, 
it prints a log line following the pattern below:

*Moved BLOCK from BALANCER*

This is somehow misleading; we could improve the pattern to something like:

*Copied BLOCK from PROXY DN, initiated by* *BALANCER*

 

An example log of target DN during balancing:

1. Wrong log printing before jira:
{code:java}
2019-12-04 09:33:19,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Moved BP-1426342230-192.168.202.11-1575277482603:blk_1073741889_1065 from 
/192.168.202.13:56322, delHint=54a14a41-0d7c-4487-b4f0-ce2848f86b48{code}
2. Correct log printing after jira:
{code:java}
2019-12-04 10:32:06,707 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Copied BP-428834875-192.168.202.11-1575425340126:blk_1073741918_1094 from 
/192.168.202.11:9866, initiated by /192.168.202.13:44502, 
delHint=84a0626a-5fa4-4c66-a776-074f537d4235{code}


> Correct target DN's log while balancing.
> 
>
> Key: HDFS-15027
> URL: https://issues.apache.org/jira/browse/HDFS-15027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.2.1
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-15027.000.patch
>
>
> During HDFS balancing, after the target DN copies a block from the proxy DN, 
> it prints a log line following the pattern below:
> *Moved BLOCK from BALANCER*
> This is wrong and misleading; we could improve the pattern to something like:
> *Moved BLOCK complete, copied from PROXY DN, initiated by* *BALANCER*
>  
> An example log of target DN during balancing:
> 1. Wrong log printing before jira:
> {code:java}
> 2019-12-04 09:33:19,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Moved BP-1426342230-192.168.202.11-1575277482603:blk_1073741889_1065 from 
> /192.168.202.13:56322, delHint=54a14a41-0d7c-4487-b4f0-ce2848f86b48{code}
> 2. Correct log printing after jira:
> {code:java}
> 2019-12-04 10:32:06,707 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Copied BP-428834875-192.168.202.11-1575425340126:blk_1073741918_1094 from 
> /192.168.202.11:9866, initiated by /192.168.202.13:44502, 
> delHint=84a0626a-5fa4-4c66-a776-074f537d4235{code}






[jira] [Commented] (HDFS-15027) Correct target DN's log while balancing.

2019-12-11 Thread Xudong Cao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994082#comment-16994082
 ] 

Xudong Cao commented on HDFS-15027:
---

[~weichiu] Perhaps we should keep the keyword "Moved" to reflect the meaning 
of moving a block during balancing, like:
{code:java}
2019-12-12 10:06:34,791 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Moved BP-1360308441-192.168.202.11-1576116241828:blk_1073741872_1048 complete, 
copied from /192.168.202.11:9866, initiated by 
/192.168.202.13:53536,delHint=c70406f8-a815-4f6f-bdf0-fd3661bd6920{code}

> Correct target DN's log while balancing.
> 
>
> Key: HDFS-15027
> URL: https://issues.apache.org/jira/browse/HDFS-15027
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.2.1
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Minor
> Attachments: HDFS-15027.000.patch
>
>
> During HDFS balancing, after the target DN copies a block from the proxy DN, 
> it prints a log line following the pattern below:
> *Moved BLOCK from BALANCER*
> This is somehow misleading; we could improve the pattern to something like:
> *Copied BLOCK from PROXY DN, initiated by* *BALANCER*
>  
> An example log of target DN during balancing:
> 1. Wrong log printing before jira:
> {code:java}
> 2019-12-04 09:33:19,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Moved BP-1426342230-192.168.202.11-1575277482603:blk_1073741889_1065 from 
> /192.168.202.13:56322, delHint=54a14a41-0d7c-4487-b4f0-ce2848f86b48{code}
> 2. Correct log printing after jira:
> {code:java}
> 2019-12-04 10:32:06,707 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Copied BP-428834875-192.168.202.11-1575425340126:blk_1073741918_1094 from 
> /192.168.202.11:9866, initiated by /192.168.202.13:44502, 
> delHint=84a0626a-5fa4-4c66-a776-074f537d4235{code}






[jira] [Updated] (HDFS-15049) Fix annotation of RouterWebServices

2019-12-11 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-15049:
---
Resolution: Abandoned
Status: Resolved  (was: Patch Available)

My bad:(.

> Fix annotation of RouterWebServices
> ---
>
> Key: HDFS-15049
> URL: https://issues.apache.org/jira/browse/HDFS-15049
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Trivial
> Attachments: HDFS-15049.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14983) RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option

2019-12-11 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14983:

Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk.

Thanks for your contribution, [~risyomei], and thanks for your reviews and 
discussion, [~elgoiri], [~aajisaka] and [~ayushtkn].

> RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
> ---
>
> Key: HDFS-14983
> URL: https://issues.apache.org/jira/browse/HDFS-14983
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14983.002.patch, HDFS-14983.003.patch, 
> HDFS-14983.004.patch, HDFS-14983.005.patch, HDFS-14983.006.patch, 
> HDFS-14983.draft.001.patch
>
>
> NameNode can update proxyuser config by -refreshSuperUserGroupsConfiguration 
> without restarting but DFSRouter cannot. It would be better for DFSRouter to 
> have such functionality to be compatible with NameNode.
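
For illustration, a hedged sketch of what the router-side refresh could look 
like, mirroring what the NameNode already does through ProxyUsers; the method 
placement and its wiring into dfsrouteradmin are assumptions, not the committed 
code:
{code:java}
// Hypothetical sketch: reload proxy-user (superuser) mappings without a restart.
public void refreshSuperUserGroupsConfiguration() throws IOException {
  ProxyUsers.refreshSuperUserGroupsConfiguration(new Configuration());
  LOG.info("Refreshed superuser proxy groups configuration on the Router");
}
{code}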



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14983) RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option

2019-12-11 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994068#comment-16994068
 ] 

Takanobu Asanuma commented on HDFS-14983:
-

[~elgoiri] Thanks for your review. Will commit it soon.

> RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
> ---
>
> Key: HDFS-14983
> URL: https://issues.apache.org/jira/browse/HDFS-14983
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
> Attachments: HDFS-14983.002.patch, HDFS-14983.003.patch, 
> HDFS-14983.004.patch, HDFS-14983.005.patch, HDFS-14983.006.patch, 
> HDFS-14983.draft.001.patch
>
>
> NameNode can update proxyuser config by -refreshSuperUserGroupsConfiguration 
> without restarting but DFSRouter cannot. It would be better for DFSRouter to 
> have such functionality to be compatible with NameNode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15046) Backport HDFS-7060 to branch-2.10

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994030#comment-16994030
 ] 

Hadoop QA commented on HDFS-15046:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
15s{color} | {color:red} Docker failed to build yetus/hadoop:c3439fff6be. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-15046 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988623/HDFS-15046.branch-2.9.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28511/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Backport HDFS-7060 to branch-2.10
> -
>
> Key: HDFS-15046
> URL: https://issues.apache.org/jira/browse/HDFS-15046
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-15046.branch-2.9.001.patch, 
> HDFS-15046.branch-2.9.002.patch
>
>
> Not sure why it didn't get backported in 2.x before, but looks like a good 
> improvement overall.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15046) Backport HDFS-7060 to branch-2.10

2019-12-11 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-15046:
---
Attachment: HDFS-15046.branch-2.9.002.patch

> Backport HDFS-7060 to branch-2.10
> -
>
> Key: HDFS-15046
> URL: https://issues.apache.org/jira/browse/HDFS-15046
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-15046.branch-2.9.001.patch, 
> HDFS-15046.branch-2.9.002.patch
>
>
> Not sure why it didn't get backported in 2.x before, but looks like a good 
> improvement overall.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15016) RBF: getDatanodeReport() should return the latest update

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16994021#comment-16994021
 ] 

Hadoop QA commented on HDFS-15016:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
50s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 |
| JIRA Issue | HDFS-15016 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988608/HDFS-15016.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 06d5cecd6015 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b56c08b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28510/testReport/ |
| Max. process+thread count | 2757 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28510/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: getDatanodeReport() should return the latest update
> 
>
> Key: HDFS-15016
> 

[jira] [Commented] (HDFS-14908) LeaseManager should check parent-child relationship when filter open files.

2019-12-11 Thread Íñigo Goiri (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993976#comment-16993976
 ] 

Íñigo Goiri commented on HDFS-14908:


The findbugs warning is being tracked in another JIRA and the test failures are 
unrelated.
+1 on [^HDFS-14908.010.patch].

> LeaseManager should check parent-child relationship when filter open files.
> ---
>
> Key: HDFS-14908
> URL: https://issues.apache.org/jira/browse/HDFS-14908
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-14908.001.patch, HDFS-14908.002.patch, 
> HDFS-14908.003.patch, HDFS-14908.004.patch, HDFS-14908.005.patch, 
> HDFS-14908.006.patch, HDFS-14908.007.patch, HDFS-14908.008.patch, 
> HDFS-14908.009.patch, HDFS-14908.010.patch, HDFS-14908.TestV4.patch, 
> Test.java, TestV2.java, TestV3.java
>
>
> Now when doing listOpenFiles(), LeaseManager only checks whether the filter 
> path is a prefix of the open file paths. We should instead check whether the 
> filter path is a parent/ancestor of the open files.
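
For illustration, a minimal sketch of the distinction; this is hypothetical 
helper code, not the patch itself:
{code:java}
// Hypothetical helper: true only when the filter is the file itself or an
// ancestor directory. A plain prefix test would wrongly let "/a/b" match
// the open file "/a/bc"; requiring a separator boundary avoids that.
static boolean isAncestor(String filterPath, String openFilePath) {
  String prefix = filterPath.endsWith("/") ? filterPath : filterPath + "/";
  return openFilePath.equals(filterPath) || openFilePath.startsWith(prefix);
}
{code}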



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15036) Active NameNode should not silently fail the image transfer

2019-12-11 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993972#comment-16993972
 ] 

Konstantin Shvachko commented on HDFS-15036:


+1 on v03 patch.
The TestFsck failure is tracked under HDFS-15038, and the checkstyle warning is 
bogus.

> Active NameNode should not silently fail the image transfer
> ---
>
> Key: HDFS-15036
> URL: https://issues.apache.org/jira/browse/HDFS-15036
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-15036.001.patch, HDFS-15036.002.patch, 
> HDFS-15036.003.patch
>
>
> Image transfer from the Standby NameNode to the Active silently fails on the 
> Active, without any logging and without notifying the receiver side.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15016) RBF: getDatanodeReport() should return the latest update

2019-12-11 Thread Íñigo Goiri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-15016:
---
Attachment: HDFS-15016.003.patch

> RBF: getDatanodeReport() should return the latest update
> 
>
> Key: HDFS-15016
> URL: https://issues.apache.org/jira/browse/HDFS-15016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-15016.000.patch, HDFS-15016.001.patch, 
> HDFS-15016.002.patch, HDFS-15016.003.patch
>
>
> Currently, when the Router calls getDatanodeReport() (or 
> getDatanodeStorageReport()) and the DN is in multiple clusters, it just takes 
> the one that comes first. It should consider the latest update.
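
For illustration, a hedged sketch of the intended selection, assuming each 
subcluster's report is a DatanodeInfo exposing getLastUpdate(); illustrative 
only, not the patch:
{code:java}
// Hypothetical sketch: when the same DN is reported by several subclusters,
// keep the report with the most recent heartbeat instead of the first one seen.
// (datanodes: Map<String, DatanodeInfo>; candidate: report from one subcluster)
DatanodeInfo existing = datanodes.get(candidate.getDatanodeUuid());
if (existing == null || candidate.getLastUpdate() > existing.getLastUpdate()) {
  datanodes.put(candidate.getDatanodeUuid(), candidate);
}
{code}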



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15016) RBF: getDatanodeReport() should return the latest update

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993952#comment-16993952
 ] 

Hadoop QA commented on HDFS-15016:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 7 new + 6 unchanged - 0 fixed = 13 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
7s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 |
| JIRA Issue | HDFS-15016 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988590/HDFS-15016.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 192567f63781 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b56c08b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28509/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28509/testReport/ |
| Max. process+thread count | 2759 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28509/console |
| Powered by | Apache Yetus 0.8.0   

[jira] [Commented] (HDFS-14743) Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

2019-12-11 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993934#comment-16993934
 ] 

Wei-Chiu Chuang commented on HDFS-14743:


POC here: [https://github.com/jojochuang/hadoop/commits/HDFS-14743]

> Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to 
> support Authorization of mkdir, rm, rmdir, copy, move etc...
> ---
>
> Key: HDFS-14743
> URL: https://issues.apache.org/jira/browse/HDFS-14743
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: Ramesh Mani
>Assignee: Wei-Chiu Chuang
>Priority: Critical
> Attachments: HDFS-14743 Enhance INodeAttributeProvider_ 
> AccessControlEnforcer Interface.pdf
>
>
> Enhance INodeAttributeProvider / AccessControlEnforcer Interface in HDFS to 
> support Authorization of mkdir, rm, rmdir, copy, move etc..., this should 
> help the implementors of the interface like Apache Ranger's HDFS 
> Authorization plugin to authorize and audit those command sets.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14743) Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

2019-12-11 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14743:
---
Attachment: HDFS-14743 Enhance INodeAttributeProvider_ 
AccessControlEnforcer Interface.pdf

> Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to 
> support Authorization of mkdir, rm, rmdir, copy, move etc...
> ---
>
> Key: HDFS-14743
> URL: https://issues.apache.org/jira/browse/HDFS-14743
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: Ramesh Mani
>Assignee: Wei-Chiu Chuang
>Priority: Critical
> Attachments: HDFS-14743 Enhance INodeAttributeProvider_ 
> AccessControlEnforcer Interface.pdf
>
>
> Enhance INodeAttributeProvider / AccessControlEnforcer Interface in HDFS to 
> support Authorization of mkdir, rm, rmdir, copy, move etc..., this should 
> help the implementors of the interface like Apache Ranger's HDFS 
> Authorization plugin to authorize and audit those command sets.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14983) RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option

2019-12-11 Thread Íñigo Goiri (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993915#comment-16993915
 ] 

Íñigo Goiri commented on HDFS-14983:


+1 on  [^HDFS-14983.006.patch].

> RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
> ---
>
> Key: HDFS-14983
> URL: https://issues.apache.org/jira/browse/HDFS-14983
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
> Attachments: HDFS-14983.002.patch, HDFS-14983.003.patch, 
> HDFS-14983.004.patch, HDFS-14983.005.patch, HDFS-14983.006.patch, 
> HDFS-14983.draft.001.patch
>
>
> NameNode can update proxyuser config by -refreshSuperUserGroupsConfiguration 
> without restarting but DFSRouter cannot. It would be better for DFSRouter to 
> have such functionality to be compatible with NameNode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15016) RBF: getDatanodeReport() should return the latest update

2019-12-11 Thread Íñigo Goiri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-15016:
---
Attachment: HDFS-15016.002.patch

> RBF: getDatanodeReport() should return the latest update
> 
>
> Key: HDFS-15016
> URL: https://issues.apache.org/jira/browse/HDFS-15016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-15016.000.patch, HDFS-15016.001.patch, 
> HDFS-15016.002.patch
>
>
> Currently, when the Router calls getDatanodeReport() (or 
> getDatanodeStorageReport()) and the DN is in multiple clusters, it just takes 
> the one that comes first. It should consider the latest update.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15016) RBF: getDatanodeReport() should return the latest update

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993848#comment-16993848
 ] 

Hadoop QA commented on HDFS-15016:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 7 new + 6 unchanged - 0 fixed = 13 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
14s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 |
| JIRA Issue | HDFS-15016 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988568/HDFS-15016.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eb0be2ba6e1c 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c174d50 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28507/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28507/testReport/ |
| Max. process+thread count | 2663 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28507/console |
| Powered by | Apache Yetus 0.8.0   

[jira] [Commented] (HDFS-15049) Fix annotation of RouterWebServices

2019-12-11 Thread Íñigo Goiri (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993843#comment-16993843
 ] 

Íñigo Goiri commented on HDFS-15049:


This should go to YARN, not HDFS.

> Fix annotation of RouterWebServices
> ---
>
> Key: HDFS-15049
> URL: https://issues.apache.org/jira/browse/HDFS-15049
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Trivial
> Attachments: HDFS-15049.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only

2019-12-11 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993837#comment-16993837
 ] 

Íñigo Goiri commented on HDFS-15051:


The original idea was to have ACLs in the mount table as if it were a regular 
file system.
The user that creates the mount table entry is free to set the permissions as 
restrictively as she wants.
Importantly, on top of this, the super user should have permission to override 
all of this (I'm not sure if this is true right now).
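
For illustration, a hedged sketch of how such ACLs could be made trustworthy: 
check the request against the entry persisted in the State Store rather than 
the mode carried in the client request (getStoredEntry and pc are hypothetical 
names, not the actual store API):
{code:java}
// Hypothetical sketch: validate against the persisted entry, so a client
// cannot bypass the check by sending an entry with a permissive mode.
MountTable stored = getStoredEntry(request.getEntry().getSourcePath());
if (stored != null) {
  pc.checkPermission(stored, FsAction.WRITE);  // owner/group/mode from the store
}
{code}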

> RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
> -
>
> Key: HDFS-15051
> URL: https://issues.apache.org/jira/browse/HDFS-15051
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15051.001.patch
>
>
> The current permission checker of #MountTableStoreImpl is not very restrict. 
> In some case, any user could add/update/remove MountTableEntry without the 
> expected permission checking.
> The following code segment try to check permission when operate 
> MountTableEntry, however mountTable object is from Client/RouterAdmin 
> {{MountTable mountTable = request.getEntry();}}, and user could pass any mode 
> which could bypass the permission checker.
> {code:java}
>   public void checkPermission(MountTable mountTable, FsAction access)
>   throws AccessControlException {
> if (isSuperUser()) {
>   return;
> }
> FsPermission mode = mountTable.getMode();
> if (getUser().equals(mountTable.getOwnerName())
> && mode.getUserAction().implies(access)) {
>   return;
> }
> if (isMemberOfGroup(mountTable.getGroupName())
> && mode.getGroupAction().implies(access)) {
>   return;
> }
> if (!getUser().equals(mountTable.getOwnerName())
> && !isMemberOfGroup(mountTable.getGroupName())
> && mode.getOtherAction().implies(access)) {
>   return;
> }
> throw new AccessControlException(
> "Permission denied while accessing mount table "
> + mountTable.getSourcePath()
> + ": user " + getUser() + " does not have " + access.toString()
> + " permissions.");
>   }
> {code}
> I just propose revoke WRITE MountTableEntry privilege to super user only.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only

2019-12-11 Thread Íñigo Goiri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-15051:
---
Summary: RBF: Propose to revoke WRITE MountTableEntry privilege to super 
user only  (was: Propose to revoke WRITE MountTableEntry privilege to super 
user only)

> RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
> -
>
> Key: HDFS-15051
> URL: https://issues.apache.org/jira/browse/HDFS-15051
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15051.001.patch
>
>
> The current permission checker of #MountTableStoreImpl is not very restrict. 
> In some case, any user could add/update/remove MountTableEntry without the 
> expected permission checking.
> The following code segment try to check permission when operate 
> MountTableEntry, however mountTable object is from Client/RouterAdmin 
> {{MountTable mountTable = request.getEntry();}}, and user could pass any mode 
> which could bypass the permission checker.
> {code:java}
>   public void checkPermission(MountTable mountTable, FsAction access)
>   throws AccessControlException {
> if (isSuperUser()) {
>   return;
> }
> FsPermission mode = mountTable.getMode();
> if (getUser().equals(mountTable.getOwnerName())
> && mode.getUserAction().implies(access)) {
>   return;
> }
> if (isMemberOfGroup(mountTable.getGroupName())
> && mode.getGroupAction().implies(access)) {
>   return;
> }
> if (!getUser().equals(mountTable.getOwnerName())
> && !isMemberOfGroup(mountTable.getGroupName())
> && mode.getOtherAction().implies(access)) {
>   return;
> }
> throw new AccessControlException(
> "Permission denied while accessing mount table "
> + mountTable.getSourcePath()
> + ": user " + getUser() + " does not have " + access.toString()
> + " permissions.");
>   }
> {code}
> I just propose revoke WRITE MountTableEntry privilege to super user only.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993810#comment-16993810
 ] 

Hadoop QA commented on HDFS-14519:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
18s{color} | {color:red} Docker failed to build yetus/hadoop:f555aa740b5. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14519 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988571/HDFS-14519-branch-2.10-003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28508/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> NameQuota is not update after concat operation, so namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14519-branch-2.10-003.patch, HDFS-14519.001.patch, 
> HDFS-14519.002.patch, HDFS-14519.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong

2019-12-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993808#comment-16993808
 ] 

Ayush Saxena commented on HDFS-14519:
-

Hi [~xkrogen], sorry for the delay; I had put this on hold until the branch-2 
rename and unfortunately forgot about it.
I have uploaded the patch for branch-2.10. The cherry-pick was clean, but 
compilation failed on the assertion in the last line of the test, so I just 
changed assert.assertEquals to assertEquals. Please give it a check.

> NameQuota is not update after concat operation, so namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14519-branch-2.10-003.patch, HDFS-14519.001.patch, 
> HDFS-14519.002.patch, HDFS-14519.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong

2019-12-11 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reopened HDFS-14519:
-

> NameQuota is not update after concat operation, so namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14519-branch-2.10-003.patch, HDFS-14519.001.patch, 
> HDFS-14519.002.patch, HDFS-14519.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong

2019-12-11 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14519:

Status: Patch Available  (was: Reopened)

> NameQuota is not update after concat operation, so namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14519-branch-2.10-003.patch, HDFS-14519.001.patch, 
> HDFS-14519.002.patch, HDFS-14519.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong

2019-12-11 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14519:

Attachment: HDFS-14519-branch-2.10-003.patch

> NameQuota is not update after concat operation, so namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14519-branch-2.10-003.patch, HDFS-14519.001.patch, 
> HDFS-14519.002.patch, HDFS-14519.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15046) Backport HDFS-7060 to branch-2.10

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993801#comment-16993801
 ] 

Hadoop QA commented on HDFS-15046:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 12m 
50s{color} | {color:red} Docker failed to build yetus/hadoop:c3439fff6be. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-15046 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988550/HDFS-15046.branch-2.9.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28506/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Backport HDFS-7060 to branch-2.10
> -
>
> Key: HDFS-15046
> URL: https://issues.apache.org/jira/browse/HDFS-15046
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-15046.branch-2.9.001.patch
>
>
> Not sure why it didn't get backported in 2.x before, but looks like a good 
> improvement overall.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15016) RBF: getDatanodeReport() should return the latest update

2019-12-11 Thread Íñigo Goiri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-15016:
---
Attachment: HDFS-15016.001.patch

> RBF: getDatanodeReport() should return the latest update
> 
>
> Key: HDFS-15016
> URL: https://issues.apache.org/jira/browse/HDFS-15016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-15016.000.patch, HDFS-15016.001.patch
>
>
> Currently, when the Router calls getDatanodeReport() (or 
> getDatanodeStorageReport()) and the DN is in multiple clusters, it just takes 
> the one that comes first. It should consider the latest update.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15046) Backport HDFS-7060 to branch-2.10

2019-12-11 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993792#comment-16993792
 ] 

Wei-Chiu Chuang commented on HDFS-15046:


LGTM pending Jenkins (re-triggered it)

> Backport HDFS-7060 to branch-2.10
> -
>
> Key: HDFS-15046
> URL: https://issues.apache.org/jira/browse/HDFS-15046
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-15046.branch-2.9.001.patch
>
>
> Not sure why it didn't get backported in 2.x before, but looks like a good 
> improvement overall.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14743) Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

2019-12-11 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993786#comment-16993786
 ] 

Wei-Chiu Chuang commented on HDFS-14743:


I've been working on this for a little while. Will post a design doc and a PoC 
shortly.

> Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to 
> support Authorization of mkdir, rm, rmdir, copy, move etc...
> ---
>
> Key: HDFS-14743
> URL: https://issues.apache.org/jira/browse/HDFS-14743
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: Ramesh Mani
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>
> Enhance INodeAttributeProvider / AccessControlEnforcer Interface in HDFS to 
> support Authorization of mkdir, rm, rmdir, copy, move etc..., this should 
> help the implementors of the interface like Apache Ranger's HDFS 
> Authorization plugin to authorize and audit those command sets.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14743) Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

2019-12-11 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14743:
--

Assignee: Wei-Chiu Chuang

> Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to 
> support Authorization of mkdir, rm, rmdir, copy, move etc...
> ---
>
> Key: HDFS-14743
> URL: https://issues.apache.org/jira/browse/HDFS-14743
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: Ramesh Mani
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>
> Enhance INodeAttributeProvider / AccessControlEnforcer Interface in HDFS to 
> support Authorization of mkdir, rm, rmdir, copy, move etc..., this should 
> help the implementors of the interface like Apache Ranger's HDFS 
> Authorization plugin to authorize and audit those command sets.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15047) Document the new decommission monitor (HDFS-14854)

2019-12-11 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-15047:

Status: Patch Available  (was: Open)

> Document the new decommission monitor (HDFS-14854)
> --
>
> Key: HDFS-15047
> URL: https://issues.apache.org/jira/browse/HDFS-15047
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
>Priority: Major
>
> We can document HDFS-14854, add it to 
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html
>  and mark it as an experimental feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15032) Balancer crashes when it fails to contact an unavailable NN via ObserverReadProxyProvider

2019-12-11 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993769#comment-16993769
 ] 

Hudson commented on HDFS-15032:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17752 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17752/])
HDFS-15032. Properly handle InvocationTargetExceptions in the proxy (xkrogen: 
rev c174d50b30abc08a4642614fb35165e79792608b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerService.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProxyCombiner.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithHANameNodes.java
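
For context, a hedged sketch of the kind of handling the commit message 
describes: a reflective combined proxy should unwrap InvocationTargetException 
so callers see the real cause rather than the reflection wrapper; illustrative 
only, not the committed code:
{code:java}
// Hypothetical sketch inside an InvocationHandler#invoke(Object, Method, Object[]):
try {
  return method.invoke(underlyingProxy, args);
} catch (InvocationTargetException ite) {
  // Re-throw the actual failure (e.g., an IOException from an unreachable NN).
  throw ite.getCause() != null ? ite.getCause() : ite;
}
{code}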


> Balancer crashes when it fails to contact an unavailable NN via 
> ObserverReadProxyProvider
> -
>
> Key: HDFS-15032
> URL: https://issues.apache.org/jira/browse/HDFS-15032
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.10.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HDFS-15032.000.patch, HDFS-15032.001.patch, 
> HDFS-15032.002.patch, HDFS-15032.003.patch, HDFS-15032.004.patch, 
> HDFS-15032.005.patch, debugger_with_tostring.png, 
> debugger_without_tostring.png
>
>
> When trying to run the Balancer using ObserverReadProxyProvider (to allow it 
> to read from the Observer Node as described in HDFS-14979), if one of the NNs 
> isn't running, the Balancer will crash.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15032) Balancer crashes when it fails to contact an unavailable NN via ObserverReadProxyProvider

2019-12-11 Thread Erik Krogen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-15032:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Just committed this to all branches from trunk down to branch-2.10. Thanks for 
the reviews, [~shv]!

> Balancer crashes when it fails to contact an unavailable NN via 
> ObserverReadProxyProvider
> -
>
> Key: HDFS-15032
> URL: https://issues.apache.org/jira/browse/HDFS-15032
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.10.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HDFS-15032.000.patch, HDFS-15032.001.patch, 
> HDFS-15032.002.patch, HDFS-15032.003.patch, HDFS-15032.004.patch, 
> HDFS-15032.005.patch, debugger_with_tostring.png, 
> debugger_without_tostring.png
>
>
> When trying to run the Balancer using ObserverReadProxyProvider (to allow it 
> to read from the Observer Node as described in HDFS-14979), if one of the NNs 
> isn't running, the Balancer will crash.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15032) Balancer crashes when it fails to contact an unavailable NN via ObserverReadProxyProvider

2019-12-11 Thread Erik Krogen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-15032:
---
Fix Version/s: 2.10.1
   3.2.2
   3.1.4
   3.3.0

> Balancer crashes when it fails to contact an unavailable NN via 
> ObserverReadProxyProvider
> -
>
> Key: HDFS-15032
> URL: https://issues.apache.org/jira/browse/HDFS-15032
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.10.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HDFS-15032.000.patch, HDFS-15032.001.patch, 
> HDFS-15032.002.patch, HDFS-15032.003.patch, HDFS-15032.004.patch, 
> HDFS-15032.005.patch, debugger_with_tostring.png, 
> debugger_without_tostring.png
>
>
> When trying to run the Balancer using ObserverReadProxyProvider (to allow it 
> to read from the Observer Node as described in HDFS-14979), if one of the NNs 
> isn't running, the Balancer will crash.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15052) WebHDFS getTrashRoot leads to OOM due to FileSystem object creation

2019-12-11 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HDFS-15052:
--

 Summary: WebHDFS getTrashRoot leads to OOM due to FileSystem 
object creation
 Key: HDFS-15052
 URL: https://issues.apache.org/jira/browse/HDFS-15052
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0-alpha2, 2.9.0
Reporter: Wei-Chiu Chuang


Quoting [~daryn] in HDFS-10756 :
{quote}Surprised nobody has discovered this will lead to an inevitable OOM in 
the NN. The NN should not be creating filesystems to itself, and must never 
create filesystems in a remote user's context or the cache will explode.
{quote}
I guess the problem lies inside NamenodeWebHdfsMethods#getTrashRoot
{code:java}
  private static String getTrashRoot(String fullPath,
  Configuration conf) throws IOException {
FileSystem fs = FileSystem.get(conf != null ? conf : new 
Configuration());
return fs.getTrashRoot(
new org.apache.hadoop.fs.Path(fullPath)).toUri().getPath();
  }
{code}
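
The FileSystem cache key includes the calling UGI, so serving getTrashRoot in 
each remote user's context leaves one cached DistributedFileSystem per user 
that is never closed. For illustration, a hedged sketch of one possible 
mitigation (not necessarily the fix this jira will take) is to bypass the cache 
and release the instance:
{code:java}
// Hypothetical mitigation sketch: use an uncached FileSystem and close it.
private static String getTrashRoot(String fullPath,
    Configuration conf) throws IOException {
  try (FileSystem fs =
      FileSystem.newInstance(conf != null ? conf : new Configuration())) {
    return fs.getTrashRoot(
        new org.apache.hadoop.fs.Path(fullPath)).toUri().getPath();
  }
}
{code}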



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong

2019-12-11 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993718#comment-16993718
 ] 

Erik Krogen commented on HDFS-14519:


Hi [~ayushtkn], just want to check if you still plan to work on this backport?

> NameQuota is not update after concat operation, so namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14519.001.patch, HDFS-14519.002.patch, 
> HDFS-14519.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15050) Optimize log information when DFSInputStream meet CannotObtainBlockLengthException

2019-12-11 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993695#comment-16993695
 ] 

Wei-Chiu Chuang commented on HDFS-15050:


+1

> Optimize log information when DFSInputStream meet 
> CannotObtainBlockLengthException
> --
>
> Key: HDFS-15050
> URL: https://issues.apache.org/jira/browse/HDFS-15050
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15050.001.patch
>
>
> We cannot easily identify which file a block belongs to when DFSInputStream 
> meets CannotObtainBlockLengthException, as in the following exception log. I 
> just suggest logging the file path when we meet 
> CannotObtainBlockLengthException.
> {code:java}
> Caused by: java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-***:blk_***_***; getBlockSize()=690504; corrupt=false; 
> offset=1811939328; 
> locs=[DatanodeInfoWithStorage[*:50010,DS-2bcadcc4-458a-45c6-a91b-8461bf7cdd71,DISK],
>  
> DatanodeInfoWithStorage[*:50010,DS-8f2bb259-ecb2-4839-8769-4a0523360d58,DISK],
>  
> DatanodeInfoWithStorage[*:50010,DS-69f4de6f-2428-42ff-9486-98c2544b1ada,DISK]]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:402)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:345)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:280)
>   at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:272)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1664)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:300)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>   at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.open(ChRootedFileSystem.java:266)
>   at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.open(ViewFileSystem.java:481)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:828)
>   at 
> org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
>   at 
> org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
>   ... 16 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15051) Propose to revoke WRITE MountTableEntry privilege to super user only

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993632#comment-16993632
 ] 

Hadoop QA commented on HDFS-15051:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 58s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.server.federation.router.TestRouterAdminCLI |
|   | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-15051 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988545/HDFS-15051.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ae6f8c443447 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c2e9783 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28504/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28504/testReport/ |
| Max. process+thread count | 

[jira] [Commented] (HDFS-14908) LeaseManager should check parent-child relationship when filter open files.

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993630#comment-16993630
 ] 

Hadoop QA commented on HDFS-14908:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
36s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.TestDeadNodeDetection |
|   | hadoop.hdfs.server.namenode.TestFsck |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14908 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988533/HDFS-14908.010.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0c2c6a0d74ad 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c2e9783 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28501/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28501/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-15046) Backport HDFS-7060 to branch-2.10

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993614#comment-16993614
 ] 

Hadoop QA commented on HDFS-15046:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  8m 
31s{color} | {color:red} Docker failed to build yetus/hadoop:c3439fff6be. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-15046 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988550/HDFS-15046.branch-2.9.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28505/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Backport HDFS-7060 to branch-2.10
> -
>
> Key: HDFS-15046
> URL: https://issues.apache.org/jira/browse/HDFS-15046
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-15046.branch-2.9.001.patch
>
>
> Not sure why it didn't get backported in 2.x before, but looks like a good 
> improvement overall.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15046) Backport HDFS-7060 to branch-2.10

2019-12-11 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-15046:
---
Attachment: HDFS-15046.branch-2.9.001.patch
Status: Patch Available  (was: Open)

> Backport HDFS-7060 to branch-2.10
> -
>
> Key: HDFS-15046
> URL: https://issues.apache.org/jira/browse/HDFS-15046
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-15046.branch-2.9.001.patch
>
>
> Not sure why it didn't get backported in 2.x before, but looks like a good 
> improvement overall.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15050) Optimize log information when DFSInputStream meet CannotObtainBlockLengthException

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993582#comment-16993582
 ] 

Hadoop QA commented on HDFS-15050:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-15050 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988539/HDFS-15050.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8fede6446889 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c2e9783 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28503/testReport/ |
| Max. process+thread count | 420 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28503/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Optimize log information when 

[jira] [Assigned] (HDFS-15046) Backport HDFS-7060 to branch-2.10

2019-12-11 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reassigned HDFS-15046:
--

Assignee: Lisheng Sun

> Backport HDFS-7060 to branch-2.10
> -
>
> Key: HDFS-15046
> URL: https://issues.apache.org/jira/browse/HDFS-15046
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
>
> Not sure why it didn't get backported in 2.x before, but looks like a good 
> improvement overall.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15051) Propose to revoke WRITE MountTableEntry privilege to super user only

2019-12-11 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15051:
---
Attachment: HDFS-15051.001.patch
Status: Patch Available  (was: Open)

Submitted a demo patch that revokes the MountTableEntry write privilege.

> Propose to revoke WRITE MountTableEntry privilege to super user only
> 
>
> Key: HDFS-15051
> URL: https://issues.apache.org/jira/browse/HDFS-15051
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15051.001.patch
>
>
> The current permission checker of #MountTableStoreImpl is not very strict. 
> In some cases, any user can add/update/remove a MountTableEntry without the 
> expected permission check.
> The following code segment tries to check permissions when operating on a 
> MountTableEntry; however, the mountTable object comes from the 
> Client/RouterAdmin ({{MountTable mountTable = request.getEntry();}}), so a 
> user can pass any mode and thereby bypass the permission checker.
> {code:java}
>   public void checkPermission(MountTable mountTable, FsAction access)
>   throws AccessControlException {
> if (isSuperUser()) {
>   return;
> }
> FsPermission mode = mountTable.getMode();
> if (getUser().equals(mountTable.getOwnerName())
> && mode.getUserAction().implies(access)) {
>   return;
> }
> if (isMemberOfGroup(mountTable.getGroupName())
> && mode.getGroupAction().implies(access)) {
>   return;
> }
> if (!getUser().equals(mountTable.getOwnerName())
> && !isMemberOfGroup(mountTable.getGroupName())
> && mode.getOtherAction().implies(access)) {
>   return;
> }
> throw new AccessControlException(
> "Permission denied while accessing mount table "
> + mountTable.getSourcePath()
> + ": user " + getUser() + " does not have " + access.toString()
> + " permissions.");
>   }
> {code}
> I propose restricting the WRITE MountTableEntry privilege to the super user only.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15049) Fix annotation of RouterWebServices

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993573#comment-16993573
 ] 

Hadoop QA commented on HDFS-15049:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
55s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-15049 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988537/HDFS-15049.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 76953dfe341c 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c2e9783 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28502/testReport/ |
| Max. process+thread count | 742 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28502/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |



[jira] [Created] (HDFS-15051) Propose to revoke WRITE MountTableEntry privilege to super user only

2019-12-11 Thread Xiaoqiao He (Jira)
Xiaoqiao He created HDFS-15051:
--

 Summary: Propose to revoke WRITE MountTableEntry privilege to 
super user only
 Key: HDFS-15051
 URL: https://issues.apache.org/jira/browse/HDFS-15051
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: rbf
Reporter: Xiaoqiao He
Assignee: Xiaoqiao He


The current permission checker of #MountTableStoreImpl is not very strict. In 
some cases, any user can add/update/remove a MountTableEntry without the 
expected permission check.
The following code segment tries to check permissions when operating on a 
MountTableEntry; however, the mountTable object comes from the 
Client/RouterAdmin ({{MountTable mountTable = request.getEntry();}}), so a 
user can pass any mode and thereby bypass the permission checker.
{code:java}
  public void checkPermission(MountTable mountTable, FsAction access)
  throws AccessControlException {
if (isSuperUser()) {
  return;
}

FsPermission mode = mountTable.getMode();
if (getUser().equals(mountTable.getOwnerName())
&& mode.getUserAction().implies(access)) {
  return;
}

if (isMemberOfGroup(mountTable.getGroupName())
&& mode.getGroupAction().implies(access)) {
  return;
}

if (!getUser().equals(mountTable.getOwnerName())
&& !isMemberOfGroup(mountTable.getGroupName())
&& mode.getOtherAction().implies(access)) {
  return;
}

throw new AccessControlException(
"Permission denied while accessing mount table "
+ mountTable.getSourcePath()
+ ": user " + getUser() + " does not have " + access.toString()
+ " permissions.");
  }
{code}

I propose restricting the WRITE MountTableEntry privilege to the super user only.
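
To make the bypass concrete, here is a hypothetical sketch built on the RBF admin API as I understand it (not code from the attached patch): the mode checked by checkPermission() is exactly what the client put into the request, so an unprivileged caller can grant itself access by choosing the mode:
{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;

public class MountTableBypassDemo {
  // Builds a request whose mode is attacker-chosen: checkPermission() above
  // evaluates this client-supplied mode, so the check always passes.
  static AddMountTableEntryRequest buildRequest() throws Exception {
    Map<String, String> dest = new HashMap<>();
    dest.put("ns1", "/data");                       // hypothetical mapping
    MountTable entry = MountTable.newInstance("/mnt", dest);
    entry.setMode(new FsPermission((short) 0777));  // world-writable mode
    return AddMountTableEntryRequest.newInstance(entry);
  }
}
{code}
Restricting writes to the super user removes this client-supplied mode from the trust path entirely.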



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15050) Optimize log information when DFSInputStream meet CannotObtainBlockLengthException

2019-12-11 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15050:
---
Attachment: HDFS-15050.001.patch
Status: Patch Available  (was: Open)

Submitted an initial patch with minor exception changes.

> Optimize log information when DFSInputStream meet 
> CannotObtainBlockLengthException
> --
>
> Key: HDFS-15050
> URL: https://issues.apache.org/jira/browse/HDFS-15050
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15050.001.patch
>
>
> We cannot easily identify which file a block belongs to when DFSInputStream 
> meets CannotObtainBlockLengthException, as in the following exception log. I 
> suggest logging the file path when we hit CannotObtainBlockLengthException.
> {code:java}
> Caused by: java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-***:blk_***_***; getBlockSize()=690504; corrupt=false; 
> offset=1811939328; 
> locs=[DatanodeInfoWithStorage[*:50010,DS-2bcadcc4-458a-45c6-a91b-8461bf7cdd71,DISK],
>  
> DatanodeInfoWithStorage[*:50010,DS-8f2bb259-ecb2-4839-8769-4a0523360d58,DISK],
>  
> DatanodeInfoWithStorage[*:50010,DS-69f4de6f-2428-42ff-9486-98c2544b1ada,DISK]]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:402)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:345)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:280)
>   at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:272)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1664)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:300)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>   at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.open(ChRootedFileSystem.java:266)
>   at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.open(ViewFileSystem.java:481)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:828)
>   at 
> org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
>   at 
> org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
>   ... 16 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15050) Optimize log information when DFSInputStream meet CannotObtainBlockLengthException

2019-12-11 Thread Xiaoqiao He (Jira)
Xiaoqiao He created HDFS-15050:
--

 Summary: Optimize log information when DFSInputStream meet 
CannotObtainBlockLengthException
 Key: HDFS-15050
 URL: https://issues.apache.org/jira/browse/HDFS-15050
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: dfsclient
Reporter: Xiaoqiao He
Assignee: Xiaoqiao He


We cannot easily identify which file a block belongs to when DFSInputStream 
meets CannotObtainBlockLengthException, as in the following exception log. I 
suggest logging the file path when we hit CannotObtainBlockLengthException.
{code:java}
Caused by: java.io.IOException: Cannot obtain block length for 
LocatedBlock{BP-***:blk_***_***; getBlockSize()=690504; corrupt=false; 
offset=1811939328; 
locs=[DatanodeInfoWithStorage[*:50010,DS-2bcadcc4-458a-45c6-a91b-8461bf7cdd71,DISK],
 DatanodeInfoWithStorage[*:50010,DS-8f2bb259-ecb2-4839-8769-4a0523360d58,DISK], 
DatanodeInfoWithStorage[*:50010,DS-69f4de6f-2428-42ff-9486-98c2544b1ada,DISK]]}
at 
org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:402)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:345)
at 
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:280)
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:272)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1664)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:300)
at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
at 
org.apache.hadoop.fs.viewfs.ChRootedFileSystem.open(ChRootedFileSystem.java:266)
at 
org.apache.hadoop.fs.viewfs.ViewFileSystem.open(ViewFileSystem.java:481)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:828)
at 
org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
at 
org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at 
org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
... 16 more
{code}
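
A minimal, self-contained sketch of the kind of change being suggested (illustrative only; the attached patch may differ): rethrow the low-level failure wrapped with the file path, preserving the original cause:
{code:java}
import java.io.IOException;

// Illustrative sketch: the outer exception names the file, and the original
// "Cannot obtain block length for LocatedBlock{...}" stays attached as the
// cause, so the log identifies which file failed to open.
public class BlockLengthErrorContextDemo {
  static long readBlockLength(String src) throws IOException {
    try {
      return fetchLength(src); // stand-in for the real block-length RPCs
    } catch (IOException ioe) {
      throw new IOException("Cannot obtain block length for file " + src, ioe);
    }
  }

  private static long fetchLength(String src) throws IOException {
    throw new IOException("Cannot obtain block length for LocatedBlock{...}");
  }

  public static void main(String[] args) {
    try {
      readBlockLength("/warehouse/table/part-00000"); // hypothetical path
    } catch (IOException e) {
      e.printStackTrace(); // file path first, original cause below it
    }
  }
}
{code}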



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15049) Fix annotation of RouterWebServices

2019-12-11 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-15049:
---
Attachment: HDFS-15049.001.patch
Status: Patch Available  (was: Open)

> Fix annotation of RouterWebServices
> ---
>
> Key: HDFS-15049
> URL: https://issues.apache.org/jira/browse/HDFS-15049
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Trivial
> Attachments: HDFS-15049.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15049) Fix annotation of RouterWebServices

2019-12-11 Thread Jinglun (Jira)
Jinglun created HDFS-15049:
--

 Summary: Fix annotation of RouterWebServices
 Key: HDFS-15049
 URL: https://issues.apache.org/jira/browse/HDFS-15049
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jinglun
Assignee: Jinglun






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14908) LeaseManager should check parent-child relationship when filter open files.

2019-12-11 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14908:
---
Attachment: HDFS-14908.010.patch

> LeaseManager should check parent-child relationship when filter open files.
> ---
>
> Key: HDFS-14908
> URL: https://issues.apache.org/jira/browse/HDFS-14908
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-14908.001.patch, HDFS-14908.002.patch, 
> HDFS-14908.003.patch, HDFS-14908.004.patch, HDFS-14908.005.patch, 
> HDFS-14908.006.patch, HDFS-14908.007.patch, HDFS-14908.008.patch, 
> HDFS-14908.009.patch, HDFS-14908.010.patch, HDFS-14908.TestV4.patch, 
> Test.java, TestV2.java, TestV3.java
>
>
> Currently, when doing listOpenFiles(), LeaseManager only checks whether the 
> filter path is a string prefix of the open file paths. We should check 
> whether the filter path is a parent/ancestor of the open files.
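
To see why a plain prefix check is wrong (a small, hypothetical illustration, not the patch itself): {{/a/b}} is a string prefix of {{/a/bc/f}} even though it is not an ancestor directory of it:
{code:java}
// startsWith() matches sibling paths that merely share a name prefix;
// a component-aware ancestor check does not.
public class AncestorCheckDemo {
  static boolean prefixCheck(String filter, String path) {
    return path.startsWith(filter);
  }

  static boolean ancestorCheck(String filter, String path) {
    String dir = filter.endsWith("/") ? filter : filter + "/";
    return path.equals(filter) || path.startsWith(dir);
  }

  public static void main(String[] args) {
    System.out.println(prefixCheck("/a/b", "/a/bc/f"));   // true  (wrong)
    System.out.println(ancestorCheck("/a/b", "/a/bc/f")); // false (correct)
    System.out.println(ancestorCheck("/a/b", "/a/b/f"));  // true
  }
}
{code}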



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14908) LeaseManager should check parent-child relationship when filter open files.

2019-12-11 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993489#comment-16993489
 ] 

Jinglun commented on HDFS-14908:


Uploaded v10 to fix the checkstyle issues.

> LeaseManager should check parent-child relationship when filter open files.
> ---
>
> Key: HDFS-14908
> URL: https://issues.apache.org/jira/browse/HDFS-14908
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-14908.001.patch, HDFS-14908.002.patch, 
> HDFS-14908.003.patch, HDFS-14908.004.patch, HDFS-14908.005.patch, 
> HDFS-14908.006.patch, HDFS-14908.007.patch, HDFS-14908.008.patch, 
> HDFS-14908.009.patch, HDFS-14908.010.patch, HDFS-14908.TestV4.patch, 
> Test.java, TestV2.java, TestV3.java
>
>
> Currently, when doing listOpenFiles(), LeaseManager only checks whether the 
> filter path is a string prefix of the open file paths. We should check 
> whether the filter path is a parent/ancestor of the open files.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14983) RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993450#comment-16993450
 ] 

Hadoop QA commented on HDFS-14983:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
44s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  3m 59s{color} | 
{color:red} hadoop-hdfs-project generated 3 new + 16 unchanged - 3 fixed = 19 
total (was 19) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 25s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}193m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.TestDeadNodeDetection |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14983 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988497/HDFS-14983.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux ccbe3452cb0f 

[jira] [Commented] (HDFS-15036) Active NameNode should not silently fail the image transfer

2019-12-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993325#comment-16993325
 ] 

Hadoop QA commented on HDFS-15036:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
49s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 88 unchanged - 0 fixed = 89 total (was 88) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-15036 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12988486/HDFS-15036.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 32b29ff6bfad 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c2e9783 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28499/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28499/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-14476) lock too long when fix inconsistent blocks between disk and in-memory

2019-12-11 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993324#comment-16993324
 ] 

Takanobu Asanuma commented on HDFS-14476:
-

The patch may have introduced a findbugs warning. Please see HDFS-15048.

> lock too long when fix inconsistent blocks between disk and in-memory
> -
>
> Key: HDFS-14476
> URL: https://issues.apache.org/jira/browse/HDFS-14476
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.6.0, 2.7.0, 3.0.3
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14476-branch-2.01.patch, HDFS-14476.00.patch, 
> HDFS-14476.002.patch, HDFS-14476.01.patch, HDFS-14476.branch-3.2.001.patch, 
> datanode-with-patch-14476.png
>
>
> When the directoryScanner finds differences between on-disk and in-memory 
> blocks, it tries to run {{checkAndUpdate}} to fix them. However, 
> {{FsDatasetImpl.checkAndUpdate}} is a synchronized call.
> As I have about 6 million blocks on every datanode, each 6-hour scan finds 
> about 25000 abnormal blocks to fix. That leads to a long lock hold on the 
> FsDatasetImpl object.
> Let's assume every block needs 10ms to fix (because of SAS disk latency); 
> that will cost 250 seconds to finish. That means all reads and writes on 
> that datanode will be blocked for about four minutes.
>  
> {code:java}
> 2019-05-06 08:06:51,704 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1644920766-10.223.143.220-1450099987967 Total blocks: 6850197, missing 
> metadata files:23574, missing block files:23574, missing blocks in 
> memory:47625, mismatched blocks:0
> ...
> 2019-05-06 08:16:41,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Took 588402ms to process 1 commands from NN
> {code}
> It takes a long time to process commands from the NN because threads are 
> blocked, and the namenode will see a long lastContact time for this 
> datanode. This probably affects all HDFS versions.
> *how to fix:*
> Just as invalidate commands from the namenode are processed in batches of 
> 1000, these abnormal blocks should be handled in batches too, sleeping 2 
> seconds between batches to allow normal block reads and writes.
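
A compact sketch of that batching idea (hypothetical shape with stand-in names; the real change lives in FsDatasetImpl/DirectoryScanner): hold the lock only per batch and sleep between batches so normal IO can proceed:
{code:java}
import java.util.List;

class BatchedReconciler {
  private static final int BATCH_SIZE = 1000;
  private static final long SLEEP_MS = 2000;
  private final Object datasetLock = new Object(); // stand-in for FsDatasetImpl

  void reconcile(List<String> abnormalBlocks) throws InterruptedException {
    for (int i = 0; i < abnormalBlocks.size(); i += BATCH_SIZE) {
      int end = Math.min(i + BATCH_SIZE, abnormalBlocks.size());
      synchronized (datasetLock) {           // lock held per batch only
        for (String block : abnormalBlocks.subList(i, end)) {
          checkAndUpdate(block);             // stand-in for the per-block fix-up
        }
      }
      if (end < abnormalBlocks.size()) {
        Thread.sleep(SLEEP_MS);              // readers/writers can take the lock
      }
    }
  }

  private void checkAndUpdate(String block) { /* repair disk/memory mismatch */ }
}
{code}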



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14983) RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option

2019-12-11 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993320#comment-16993320
 ] 

Takanobu Asanuma commented on HDFS-14983:
-

Created HDFS-15048 for the findbugs warning.

> RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
> ---
>
> Key: HDFS-14983
> URL: https://issues.apache.org/jira/browse/HDFS-14983
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
> Attachments: HDFS-14983.002.patch, HDFS-14983.003.patch, 
> HDFS-14983.004.patch, HDFS-14983.005.patch, HDFS-14983.006.patch, 
> HDFS-14983.draft.001.patch
>
>
> NameNode can update its proxyuser config via 
> -refreshSuperUserGroupsConfiguration without restarting, but DFSRouter 
> cannot. It would be better for DFSRouter to have the same functionality, for 
> compatibility with NameNode.
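
For reference, the existing NameNode-side refresh and the Router-side option this JIRA proposes would be invoked like this (usage sketch; the dfsrouteradmin option name is per this JIRA, not yet in a release):
{noformat}
# Existing: refresh proxyuser config on the NameNode without a restart
hdfs dfsadmin -refreshSuperUserGroupsConfiguration

# Proposed here: the equivalent refresh on the DFSRouter
hdfs dfsrouteradmin -refreshSuperUserGroupsConfiguration
{noformat}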



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15048) Fix findbug in DirectoryScanner

2019-12-11 Thread Takanobu Asanuma (Jira)
Takanobu Asanuma created HDFS-15048:
---

 Summary: Fix findbug in DirectoryScanner
 Key: HDFS-15048
 URL: https://issues.apache.org/jira/browse/HDFS-15048
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Takanobu Asanuma


There is a findbugs warning in DirectoryScanner.

{noformat}
Multithreaded correctness Warnings

org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile() calls 
Thread.sleep() with a lock held
Bug type SWL_SLEEP_WITH_LOCK_HELD (click for details) 
In class org.apache.hadoop.hdfs.server.datanode.DirectoryScanner
In method org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile()
At DirectoryScanner.java:[line 441]
{noformat}

https://builds.apache.org/job/PreCommit-HDFS-Build/28498/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
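
For clarity, this is the shape findbugs objects to, and the usual remedy (a generic illustration, not the DirectoryScanner code itself): move the sleep outside the synchronized region so other threads can take the lock during the pause:
{code:java}
class SleepWithLockDemo {
  private final Object lock = new Object();

  void flagged() throws InterruptedException {
    synchronized (lock) {
      // ... batch work ...
      Thread.sleep(2000); // findbugs: sleeping while holding the lock
    }
  }

  void fixed() throws InterruptedException {
    synchronized (lock) {
      // ... batch work ...
    }
    Thread.sleep(2000);   // lock is free while we pause
  }
}
{code}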



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14983) RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option

2019-12-11 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993305#comment-16993305
 ] 

Takanobu Asanuma commented on HDFS-14983:
-

Thanks for updating the patch, [~risyomei].
 * The cc warnings seem to have existed before; they are not related to this 
patch.
 * The findbugs warning and the failed tests also appear unrelated. HDFS-15038 
is addressing the failed TestFsck test.

+1 on [^HDFS-14983.006.patch].

[~elgoiri] Could you review it again?

> RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
> ---
>
> Key: HDFS-14983
> URL: https://issues.apache.org/jira/browse/HDFS-14983
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
> Attachments: HDFS-14983.002.patch, HDFS-14983.003.patch, 
> HDFS-14983.004.patch, HDFS-14983.005.patch, HDFS-14983.006.patch, 
> HDFS-14983.draft.001.patch
>
>
> NameNode can update its proxyuser config via 
> -refreshSuperUserGroupsConfiguration without restarting, but DFSRouter 
> cannot. It would be better for DFSRouter to have the same functionality, for 
> compatibility with NameNode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14997) BPServiceActor process command from NameNode asynchronously

2019-12-11 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993275#comment-16993275
 ] 

Xiaoqiao He commented on HDFS-14997:


Thanks [~weichiu],
{quote}Question: I can imagine DNA_INVALIDATE taking a long time to process. Do 
we already have metrics or log messages when the block invalidation command 
taking too long?
{quote}
According to my observation, trunk has no metrics or logs to trace the 
performance of IO operations (including some very long lock waits). I have 
added them in my internal version. I suggest adding them here as well; a 
follow-up JIRA can cover this if necessary. Thanks.
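
A minimal sketch of the kind of instrumentation being suggested (hypothetical; nothing like this is claimed to be in trunk): time each command and warn when processing runs long, so slow DNA_INVALIDATE handling becomes visible:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class CommandTimer {
  private static final Logger LOG = LoggerFactory.getLogger(CommandTimer.class);
  private static final long SLOW_MS = 1000; // hypothetical threshold

  static void runTimed(String name, Runnable command) {
    long start = System.nanoTime();
    command.run();                           // the potentially slow IO work
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    if (elapsedMs > SLOW_MS) {
      LOG.warn("Processing command {} took {} ms", name, elapsedMs);
    }
  }
}
{code}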

> BPServiceActor process command from NameNode asynchronously
> ---
>
> Key: HDFS-14997
> URL: https://issues.apache.org/jira/browse/HDFS-14997
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-14997.001.patch, HDFS-14997.002.patch, 
> HDFS-14997.003.patch, HDFS-14997.004.patch, HDFS-14997.005.patch
>
>
> There are two core functions in the #BPServiceActor main processing flow: 
> reporting (#sendHeartbeat, #blockReport, #cacheReport) and #processCommand. 
> If processCommand takes a long time, it blocks the report flow. Meanwhile, 
> processCommand can take a long time (over 1000s in the worst case I have 
> met) when the IO load of the DataNode is very high. Since some IO operations 
> are under #datasetLock, it can wait a long time to acquire #datasetLock when 
> processing some commands (such as #DNA_INVALIDATE). In such cases, the 
> #heartbeat will not be sent to the NameNode in time, triggering other 
> disasters.
> I propose to process #processCommand asynchronously so it does not block 
> #BPServiceActor from sending heartbeats back to the NameNode under high IO 
> load.
> Notes:
> 1. Lifeline could be one effective solution; however, some old branches do 
> not support this feature.
> 2. IO operations under #datasetLock are another issue; I think we should 
> solve it in another JIRA.
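
A self-contained sketch of the proposed shape (hypothetical; the attached patches define the real structure): the actor thread only enqueues commands and keeps heartbeating, while a worker thread drains the queue:
{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Slow command processing (e.g. a DNA_INVALIDATE waiting on the dataset
// lock) happens on the worker thread, so the heartbeat loop never blocks.
class AsyncCommandProcessor {
  private final BlockingQueue<Runnable> commands = new LinkedBlockingQueue<>();
  private final Thread worker = new Thread(() -> {
    try {
      while (!Thread.currentThread().isInterrupted()) {
        commands.take().run(); // slow IO happens here, off the heartbeat path
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }, "commandProcessingThread");

  AsyncCommandProcessor() {
    worker.setDaemon(true);
    worker.start();
  }

  // Called from the heartbeat loop; returns immediately.
  void enqueue(Runnable command) {
    commands.add(command);
  }
}
{code}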



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org