[jira] [Updated] (HDFS-14937) [SBN read] ObserverReadProxyProvider should throw InterruptException

2019-12-28 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14937:

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> [SBN read] ObserverReadProxyProvider should throw InterruptException
> 
>
> Key: HDFS-14937
> URL: https://issues.apache.org/jira/browse/HDFS-14937
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14937-trunk-001.patch, HDFS-14937-trunk-002.patch
>
>
> ObserverReadProxyProvider should throw the InterruptedException immediately 
> if an Observer catches an InterruptedException during invocation.
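
For illustration, a minimal sketch of the requested behaviour, assuming a 
reflective failover loop of the kind the provider uses (class and method 
names here are hypothetical, not the actual ObserverReadProxyProvider code):
{code:java}
import java.io.InterruptedIOException;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.List;

/** Hypothetical sketch: stop failing over to the next Observer once the
 *  current call has died of a thread interrupt. */
public class ObserverInvokeSketch {
  static Object invokeOnObservers(List<Object> observerProxies, Method method,
      Object[] args) throws Throwable {
    Throwable lastError = null;
    for (Object proxy : observerProxies) {
      try {
        return method.invoke(proxy, args);
      } catch (InvocationTargetException ite) {
        Throwable cause = ite.getCause();
        if (cause instanceof InterruptedException
            || cause instanceof InterruptedIOException) {
          // The caller's thread was interrupted: propagate immediately
          // instead of retrying on the remaining Observers.
          throw cause;
        }
        lastError = cause; // any other failure: try the next Observer
      }
    }
    throw lastError != null ? lastError
        : new IllegalStateException("no Observer proxies available");
  }
}
{code}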






[jira] [Commented] (HDFS-14937) [SBN read] ObserverReadProxyProvider should throw InterruptException

2019-12-28 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004687#comment-17004687
 ] 

Ayush Saxena commented on HDFS-14937:
-

Committed to trunk.
Thanx [~xuzq_zander] for the contribution and [~vagarychen] for the review!!!

> [SBN read] ObserverReadProxyProvider should throw InterruptException
> 
>
> Key: HDFS-14937
> URL: https://issues.apache.org/jira/browse/HDFS-14937
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14937-trunk-001.patch, HDFS-14937-trunk-002.patch
>
>
> ObserverReadProxyProvider should throw the InterruptedException immediately 
> if an Observer catches an InterruptedException during invocation.






[jira] [Commented] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only

2019-12-28 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004685#comment-17004685
 ] 

Ayush Saxena commented on HDFS-15051:
-

Is only {{WRITE}} permission required for the parent?
Please check what the scenario is with mkdirs() and which permissions are 
checked there; IMO we should keep this the same as mkdir. A rough model of 
that check is sketched below.
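
For reference, my understanding is that an mkdirs-style check amounts to 
EXECUTE on every ancestor plus WRITE on the immediate parent; a toy model of 
that shape (all names hypothetical, not the NameNode's FSPermissionChecker):
{code:java}
import java.util.List;

/** Toy model of an mkdirs-style check: traverse (EXECUTE) every ancestor,
 *  then WRITE on the immediate parent. Names are hypothetical. */
public class ParentPermissionSketch {
  enum Action { READ, WRITE, EXECUTE }

  interface Dir {
    boolean permits(String user, Action action);
  }

  static void checkMkdirsStyle(List<Dir> ancestors, Dir parent, String user) {
    for (Dir dir : ancestors) {
      if (!dir.permits(user, Action.EXECUTE)) {
        throw new SecurityException(user + " lacks EXECUTE on an ancestor");
      }
    }
    if (!parent.permits(user, Action.WRITE)) {
      throw new SecurityException(user + " lacks WRITE on the parent");
    }
  }
}
{code}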

> RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
> -
>
> Key: HDFS-15051
> URL: https://issues.apache.org/jira/browse/HDFS-15051
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15051.001.patch, HDFS-15051.002.patch, 
> HDFS-15051.003.patch, HDFS-15051.004.patch, HDFS-15051.005.patch, 
> HDFS-15051.006.patch
>
>
> The current permission checker of #MountTableStoreImpl is not very strict. 
> In some cases, any user could add/update/remove a MountTableEntry without 
> the expected permission check.
> The following code segment tries to check permissions when operating on a 
> MountTableEntry. However, the mountTable object comes from the 
> Client/RouterAdmin ({{MountTable mountTable = request.getEntry();}}), so a 
> user could pass any mode and bypass the permission checker.
> {code:java}
>   public void checkPermission(MountTable mountTable, FsAction access)
>   throws AccessControlException {
> if (isSuperUser()) {
>   return;
> }
> FsPermission mode = mountTable.getMode();
> if (getUser().equals(mountTable.getOwnerName())
> && mode.getUserAction().implies(access)) {
>   return;
> }
> if (isMemberOfGroup(mountTable.getGroupName())
> && mode.getGroupAction().implies(access)) {
>   return;
> }
> if (!getUser().equals(mountTable.getOwnerName())
> && !isMemberOfGroup(mountTable.getGroupName())
> && mode.getOtherAction().implies(access)) {
>   return;
> }
> throw new AccessControlException(
> "Permission denied while accessing mount table "
> + mountTable.getSourcePath()
> + ": user " + getUser() + " does not have " + access.toString()
> + " permissions.");
>   }
> {code}
> I propose restricting the WRITE MountTableEntry privilege to the super user 
> only.
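
A minimal sketch of what the proposal would look like (method and field names 
are illustrative, not the actual RouterPermissionChecker):
{code:java}
/** Illustrative sketch of the proposal: mount table writes succeed only for
 *  the superuser, so a client-supplied mode can no longer widen access. */
public class MountTableWritePermissionSketch {
  private final String superUser;

  MountTableWritePermissionSketch(String superUser) {
    this.superUser = superUser;
  }

  void checkWritePermission(String currentUser, String mountPath) {
    if (!superUser.equals(currentUser)) {
      // Ignore the owner/group/mode carried in the request entirely;
      // they originate from the client and cannot be trusted.
      throw new SecurityException("Permission denied: only " + superUser
          + " may add/update/remove mount table entry " + mountPath);
    }
  }
}
{code}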






[jira] [Commented] (HDFS-15082) RBF: Check each component length of destination path when add/update mount entry

2019-12-28 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004682#comment-17004682
 ] 

Ayush Saxena commented on HDFS-15082:
-

Is it similar to HDFS-13576?
Regarding the patch:
* Do we have a separate configuration at the Router to specify the path 
length independently of the namespace? If so, I don't think it makes sense.
* Should the default be 0, disabling the check by default to ensure 
compatibility?
* What is the use case for this, or put differently, what is the advantage 
of having it?

> RBF: Check each component length of destination path when add/update mount 
> entry
> 
>
> Key: HDFS-15082
> URL: https://issues.apache.org/jira/browse/HDFS-15082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15082.001.patch
>
>
> When adding/updating a mount entry, the length of each component of the 
> destination path could exceed the filesystem's path component length limit 
> (see `dfs.namenode.fs-limits.max-component-length` on the NameNode). So we 
> should check the length of each component of the destination path when 
> adding/updating a mount entry on the Router side.
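
A sketch of the Router-side check being proposed, with the 0-disables-it 
default suggested above (where the limit comes from is an assumption here, 
presumably a new Router configuration key; this is not the actual patch):
{code:java}
/** Sketch: validate each component of a mount entry's destination path,
 *  mirroring the NameNode's dfs.namenode.fs-limits.max-component-length.
 *  A limit of 0 disables the check for compatibility. */
public class ComponentLengthCheckSketch {
  static void checkDestination(String destinationPath, int maxComponentLength) {
    if (maxComponentLength <= 0) {
      return; // disabled by default
    }
    for (String component : destinationPath.split("/")) {
      if (component.length() > maxComponentLength) {
        throw new IllegalArgumentException("Component '" + component
            + "' of " + destinationPath + " exceeds the maximum length of "
            + maxComponentLength);
      }
    }
  }
}
{code}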






[jira] [Updated] (HDFS-15082) RBF: Check each component length of destination path when add/update mount entry

2019-12-28 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15082:
---
Attachment: HDFS-15082.001.patch
Status: Patch Available  (was: Open)

Submitted patch v001; pending Jenkins.

> RBF: Check each component length of destination path when add/update mount 
> entry
> 
>
> Key: HDFS-15082
> URL: https://issues.apache.org/jira/browse/HDFS-15082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15082.001.patch
>
>
> When adding/updating a mount entry, the length of each component of the 
> destination path could exceed the filesystem's path component length limit 
> (see `dfs.namenode.fs-limits.max-component-length` on the NameNode). So we 
> should check the length of each component of the destination path when 
> adding/updating a mount entry on the Router side.






[jira] [Created] (HDFS-15082) RBF: Check each component length of destination path when add/update mount entry

2019-12-28 Thread Xiaoqiao He (Jira)
Xiaoqiao He created HDFS-15082:
--

 Summary: RBF: Check each component length of destination path when 
add/update mount entry
 Key: HDFS-15082
 URL: https://issues.apache.org/jira/browse/HDFS-15082
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: rbf
Reporter: Xiaoqiao He
Assignee: Xiaoqiao He


When adding/updating a mount entry, the length of each component of the 
destination path could exceed the filesystem's path component length limit 
(see `dfs.namenode.fs-limits.max-component-length` on the NameNode). So we 
should check the length of each component of the destination path when 
adding/updating a mount entry on the Router side.






[jira] [Commented] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only

2019-12-28 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004668#comment-17004668
 ] 

Xiaoqiao He commented on HDFS-15051:


v006 rebases on trunk and fixes the checkstyle issues.

> RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
> -
>
> Key: HDFS-15051
> URL: https://issues.apache.org/jira/browse/HDFS-15051
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15051.001.patch, HDFS-15051.002.patch, 
> HDFS-15051.003.patch, HDFS-15051.004.patch, HDFS-15051.005.patch, 
> HDFS-15051.006.patch
>
>
> The current permission checker of #MountTableStoreImpl is not very strict. 
> In some cases, any user could add/update/remove a MountTableEntry without 
> the expected permission check.
> The following code segment tries to check permissions when operating on a 
> MountTableEntry. However, the mountTable object comes from the 
> Client/RouterAdmin ({{MountTable mountTable = request.getEntry();}}), so a 
> user could pass any mode and bypass the permission checker.
> {code:java}
>   public void checkPermission(MountTable mountTable, FsAction access)
>   throws AccessControlException {
> if (isSuperUser()) {
>   return;
> }
> FsPermission mode = mountTable.getMode();
> if (getUser().equals(mountTable.getOwnerName())
> && mode.getUserAction().implies(access)) {
>   return;
> }
> if (isMemberOfGroup(mountTable.getGroupName())
> && mode.getGroupAction().implies(access)) {
>   return;
> }
> if (!getUser().equals(mountTable.getOwnerName())
> && !isMemberOfGroup(mountTable.getGroupName())
> && mode.getOtherAction().implies(access)) {
>   return;
> }
> throw new AccessControlException(
> "Permission denied while accessing mount table "
> + mountTable.getSourcePath()
> + ": user " + getUser() + " does not have " + access.toString()
> + " permissions.");
>   }
> {code}
> I propose restricting the WRITE MountTableEntry privilege to the super user 
> only.






[jira] [Updated] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only

2019-12-28 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15051:
---
Attachment: HDFS-15051.006.patch

> RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
> -
>
> Key: HDFS-15051
> URL: https://issues.apache.org/jira/browse/HDFS-15051
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15051.001.patch, HDFS-15051.002.patch, 
> HDFS-15051.003.patch, HDFS-15051.004.patch, HDFS-15051.005.patch, 
> HDFS-15051.006.patch
>
>
> The current permission checker of #MountTableStoreImpl is not very strict. 
> In some cases, any user could add/update/remove a MountTableEntry without 
> the expected permission check.
> The following code segment tries to check permissions when operating on a 
> MountTableEntry. However, the mountTable object comes from the 
> Client/RouterAdmin ({{MountTable mountTable = request.getEntry();}}), so a 
> user could pass any mode and bypass the permission checker.
> {code:java}
>   public void checkPermission(MountTable mountTable, FsAction access)
>   throws AccessControlException {
> if (isSuperUser()) {
>   return;
> }
> FsPermission mode = mountTable.getMode();
> if (getUser().equals(mountTable.getOwnerName())
> && mode.getUserAction().implies(access)) {
>   return;
> }
> if (isMemberOfGroup(mountTable.getGroupName())
> && mode.getGroupAction().implies(access)) {
>   return;
> }
> if (!getUser().equals(mountTable.getOwnerName())
> && !isMemberOfGroup(mountTable.getGroupName())
> && mode.getOtherAction().implies(access)) {
>   return;
> }
> throw new AccessControlException(
> "Permission denied while accessing mount table "
> + mountTable.getSourcePath()
> + ": user " + getUser() + " does not have " + access.toString()
> + " permissions.");
>   }
> {code}
> I propose restricting the WRITE MountTableEntry privilege to the super user 
> only.






[jira] [Commented] (HDFS-15075) Remove process command timing from BPServiceActor

2019-12-28 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004667#comment-17004667
 ] 

Xiaoqiao He commented on HDFS-15075:


Updated to v003, fixing the failed unit tests and checkstyle.

> Remove process command timing from BPServiceActor
> -
>
> Key: HDFS-15075
> URL: https://issues.apache.org/jira/browse/HDFS-15075
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15075.001.patch, HDFS-15075.002.patch, 
> HDFS-15075.003.patch
>
>
> HDFS-14997 made the command processing asynchronous.
> Right now, we are timing how long it takes to add to the queue.
> We should remove this timing and maybe move it inside the processing thread.
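
A rough sketch of the suggested direction, assuming a queue-draining worker 
thread (names are hypothetical, not BPServiceActor's actual fields):
{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Sketch: time command processing inside the worker thread instead of
 *  timing the enqueue on the heartbeat path. */
public class CommandProcessorSketch implements Runnable {
  private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

  void enqueue(Runnable command) {
    queue.add(command); // enqueueing is cheap; no need to measure it
  }

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        Runnable command = queue.take();
        long startNs = System.nanoTime();
        command.run();
        long elapsedMs = (System.nanoTime() - startNs) / 1_000_000L;
        System.out.println("Processed command in " + elapsedMs + " ms");
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the flag and exit
      }
    }
  }
}
{code}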






[jira] [Updated] (HDFS-15075) Remove process command timing from BPServiceActor

2019-12-28 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15075:
---
Attachment: HDFS-15075.003.patch

> Remove process command timing from BPServiceActor
> -
>
> Key: HDFS-15075
> URL: https://issues.apache.org/jira/browse/HDFS-15075
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15075.001.patch, HDFS-15075.002.patch, 
> HDFS-15075.003.patch
>
>
> HDFS-14997 made the command processing asynchronous.
> Right now, we are timing how long it takes to add to the queue.
> We should remove this timing and maybe move it inside the processing thread.






[jira] [Commented] (HDFS-14957) INodeReference Space Consumed was not same in QuotaUsage and ContentSummary

2019-12-28 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004659#comment-17004659
 ] 

Surendra Singh Lilhore commented on HDFS-14957:
---

[~hemanthboyina], sorry for the late reply.

You need to rebase your test class.

> INodeReference Space Consumed was not same in QuotaUsage and ContentSummary
> ---
>
> Key: HDFS-14957
> URL: https://issues.apache.org/jira/browse/HDFS-14957
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.4
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14957.001.patch, HDFS-14957.002.patch, 
> HDFS-14957.JPG
>
>
> For INodeReferences, the space consumed was different in QuotaUsage and 
> ContentSummary.






[jira] [Commented] (HDFS-15074) DataNode.DataTransfer thread should catch all the exception and log it.

2019-12-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004656#comment-17004656
 ] 

Hudson commented on HDFS-15074:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17800 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17800/])
HDFS-15074. DataNode.DataTransfer thread should catch all the exception 
(surendralilhore: rev ee51eadda01e02ac5759ca19756f6f961c8eb0cd)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> DataNode.DataTransfer thread should catch all the exception and log it.
> 
>
> Key: HDFS-15074
> URL: https://issues.apache.org/jira/browse/HDFS-15074
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-15074.001.patch, HDFS-15074.002.patch
>
>
> Sometimes, if this thread throws an exception other than IOException, we 
> will not be able to catch and log it.
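
The shape of the requested hardening, as a sketch (the transfer body is a 
placeholder, not the actual DataNode code):
{code:java}
/** Sketch: a DataTransfer-style run() that catches every Throwable, not
 *  just IOException, so no failure kills the thread silently. */
public class DataTransferSketch implements Runnable {
  @Override
  public void run() {
    try {
      transferBlock();
    } catch (Throwable t) {
      // Previously only IOException was handled; a RuntimeException or
      // Error here would end the thread without any log line.
      System.err.println("DataTransfer failed unexpectedly: " + t);
    }
  }

  private void transferBlock() {
    // hypothetical placeholder for the actual block transfer work
  }
}
{code}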






[jira] [Updated] (HDFS-15074) DataNode.DataTransfer thread should catch all the exception and log it.

2019-12-28 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-15074:
--
Fix Version/s: 3.2.2
   3.1.4
   3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.2 and branch-3.1.

> DataNode.DataTransfer thread should catch all the exception and log it.
> 
>
> Key: HDFS-15074
> URL: https://issues.apache.org/jira/browse/HDFS-15074
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-15074.001.patch, HDFS-15074.002.patch
>
>
> Sometimes, if this thread throws an exception other than IOException, we 
> will not be able to catch and log it.






[jira] [Comment Edited] (HDFS-15063) HttpFS : getFileStatus doesn't return ecPolicy

2019-12-28 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004640#comment-17004640
 ] 

Takanobu Asanuma edited comment on HDFS-15063 at 12/29/19 5:12 AM:
---

Thanks for your explanation and updating the patch, [~hemanthboyina].

After applying [^HDFS-15063.002.patch], users can execute GETFILESTATUS of 
the WebHDFS REST API against the HttpFS server. But it still fails when users 
use {{WebHdfsFileSystem#getFileStatus()}} against the HttpFS server. I 
confirmed it by adding the following code to the bottom of testECPolicy().
{code:java}
WebHdfsFileSystem httpfsWebHdfs = (WebHdfsFileSystem) FileSystem.get(
    new URI("webhdfs://" + TestJettyHelper.getJettyURL().toURI().getAuthority()),
    TestHdfsHelper.getHdfsConf());
HdfsFileStatus httpfsFileStatus =
    (HdfsFileStatus) httpfsWebHdfs.getFileStatus(ecFile);
assertNotNull(httpfsFileStatus.getErasureCodingPolicy()); // This should succeed.
{code}
This is because the return value of {{FSOperations#toJsonInner}} doesn't 
include {{ecPolicyObj}}, so {{JsonUtilClient#toFileStatus}} can't get the 
ecPolicy information. Could you check it?


was (Author: tasanuma0829):
Thanks for your explanation and updating the patch, [~hemanthboyina].

After applying [^HDFS-15063.002.patch], users can execute GETFILESTATUS of 
WEBHDFS REST AI against HttpFS server. But it still fails when users use 
{{WebHdfsFileSystem#getFileStatus()}} against HttpFS server. I confirmed it 
with adding the following codes to the bottom of testECPolicy().
{code:java}
 WebHdfsFileSystem httpfsWebHdfs = (WebHdfsFileSystem) FileSystem.get(new 
URI("webhdfs://" + TestJettyHelper.getJettyURL().toURI().getAuthority()), 
TestHdfsHelper.getHdfsConf());
 HdfsFileStatus httpfsFileStatus = 
(HdfsFileStatus)httpfsWebHdfs.getFileStatus(ecFile);
 assertNotNull(httpfsFileStatus.getErasureCodingPolicy()); // This should 
succeed.
{code}
This is because the return value of {{FSOperations#toJsonInner}} doesn't have 
{{ecPolicyObj}} and {{JsonUtilClient#toFileStatus}} can't get the ecPolicy 
information. Could you check it?

> HttpFS : getFileStatus doesn't return ecPolicy
> --
>
> Key: HDFS-15063
> URL: https://issues.apache.org/jira/browse/HDFS-15063
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15063.001.patch, HDFS-15063.002.patch
>
>
> Currently, a LISTSTATUS call to HttpFS returns JSON, and the jsonArray 
> elements have the ecPolicy name.
> But when HttpFsFileSystem converts it back into a FileStatus object, the 
> ecPolicy is not added.






[jira] [Commented] (HDFS-15063) HttpFS : getFileStatus doesn't return ecPolicy

2019-12-28 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004640#comment-17004640
 ] 

Takanobu Asanuma commented on HDFS-15063:
-

Thanks for your explanation and updating the patch, [~hemanthboyina].

After applying [^HDFS-15063.002.patch], users can execute GETFILESTATUS of 
the WebHDFS REST API against the HttpFS server. But it still fails when users 
use {{WebHdfsFileSystem#getFileStatus()}} against the HttpFS server. I 
confirmed it by adding the following code to the bottom of testECPolicy().
{code:java}
WebHdfsFileSystem httpfsWebHdfs = (WebHdfsFileSystem) FileSystem.get(
    new URI("webhdfs://" + TestJettyHelper.getJettyURL().toURI().getAuthority()),
    TestHdfsHelper.getHdfsConf());
HdfsFileStatus httpfsFileStatus =
    (HdfsFileStatus) httpfsWebHdfs.getFileStatus(ecFile);
assertNotNull(httpfsFileStatus.getErasureCodingPolicy()); // This should succeed.
{code}
This is because the return value of {{FSOperations#toJsonInner}} doesn't 
include {{ecPolicyObj}}, so {{JsonUtilClient#toFileStatus}} can't get the 
ecPolicy information. Could you check it?
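
For illustration, the kind of server-side change this points at, assuming the 
status JSON is built as a map (key names below are illustrative, not HttpFS's 
actual constants):
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustration: unless the server-side JSON carries the EC policy, the
 *  client-side parser has nothing to rebuild the policy from. */
public class EcPolicyJsonSketch {
  static Map<String, Object> fileStatusToJson(String pathSuffix,
      String ecPolicyName) {
    Map<String, Object> json = new LinkedHashMap<>();
    json.put("pathSuffix", pathSuffix);
    json.put("type", "FILE");
    if (ecPolicyName != null) {
      json.put("ecPolicy", ecPolicyName); // the missing piece, per above
    }
    return json;
  }
}
{code}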

> HttpFS : getFileStatus doesn't return ecPolicy
> --
>
> Key: HDFS-15063
> URL: https://issues.apache.org/jira/browse/HDFS-15063
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15063.001.patch, HDFS-15063.002.patch
>
>
> Currently, a LISTSTATUS call to HttpFS returns JSON, and the jsonArray 
> elements have the ecPolicy name.
> But when HttpFsFileSystem converts it back into a FileStatus object, the 
> ecPolicy is not added.






[jira] [Commented] (HDFS-14546) Document block placement policies

2019-12-28 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004637#comment-17004637
 ] 

Ayush Saxena commented on HDFS-14546:
-

There is no PR now. I remember the PR had some comments from [~weichiu]; are 
they addressed?

[~weichiu], if you have time, could you take a look?

> Document block placement policies
> -
>
> Key: HDFS-14546
> URL: https://issues.apache.org/jira/browse/HDFS-14546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Amithsha
>Priority: Major
>  Labels: documentation
> Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, 
> HDFS-14546-03.patch, HDFS-14546-04.patch, HDFS-14546-05.patch, 
> HDFS-14546-06.patch, HDFS-14546-07.patch, HDFS-14546-08.patch, 
> HdfsDesign.patch
>
>
> Currently, all the documentation refers to the default block placement policy.
> However, over time there have been new policies:
> * BlockPlacementPolicyRackFaultTolerant (HDFS-7891)
> * BlockPlacementPolicyWithNodeGroup (HDFS-3601)
> * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006)
> We should update the documentation to cover them, explaining their 
> particularities and probably how to set up each of them.
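
For what it's worth, selecting a non-default policy goes through 
{{dfs.block.replicator.classname}}, so the docs could show something along 
these lines (normally set in hdfs-site.xml; the Java form here is just for 
illustration):
{code:java}
import org.apache.hadoop.conf.Configuration;

/** Illustration: pointing the NameNode at a non-default placement policy. */
public class PlacementPolicySelectionSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.block.replicator.classname",
        "org.apache.hadoop.hdfs.server.blockmanagement."
            + "BlockPlacementPolicyRackFaultTolerant");
    System.out.println(conf.get("dfs.block.replicator.classname"));
  }
}
{code}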






[jira] [Commented] (HDFS-14934) [SBN Read] Standby NN throws many InterruptedExceptions when dfs.ha.tail-edits.period is 0

2019-12-28 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004495#comment-17004495
 ] 

Ayush Saxena commented on HDFS-14934:
-

Thanx [~tasanuma] for the review and commit!

> [SBN Read] Standby NN throws many InterruptedExceptions when 
> dfs.ha.tail-edits.period is 0
> --
>
> Key: HDFS-14934
> URL: https://issues.apache.org/jira/browse/HDFS-14934
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Takanobu Asanuma
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HDFS-14934-01.patch
>
>
> When dfs.ha.tail-edits.period is 0 ms (or a very short time), there are 
> many WARN logs in the standby NN.
> {noformat}
> 2019-10-25 16:25:46,945 [Logger channel (from parallel executor) to 
> <NN hostname>/<NN address>:<port>] WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(55)) - Thread 
> (Thread[Logger channel (from parallel executor) to 
> <NN hostname>/<NN address>:<port>,5,main]) interrupted: 
> java.lang.InterruptedException
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:509)
>   at 
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
>   at 
> org.apache.hadoop.util.concurrent.ExecutorHelper.logThrowableFromAfterExecute(ExecutorHelper.java:48)
>   at 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor.afterExecute(HadoopThreadPoolExecutor.java:90)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1157)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
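
A sketch of the usual way to keep this quiet, assuming an afterExecute-style 
hook over the submitted futures (a generic pattern, not necessarily the fix 
that was committed):
{code:java}
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

/** Sketch: only probe completed futures, and treat an interrupt during the
 *  probe as benign rather than logging it as a failure. */
public class AfterExecuteSketch {
  static void logThrowableIfAny(Runnable r) {
    if (!(r instanceof Future<?>) || !((Future<?>) r).isDone()) {
      return; // nothing finished to report on
    }
    try {
      ((Future<?>) r).get();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // benign: restore the flag, no WARN
    } catch (CancellationException e) {
      // cancelled tasks are expected when edit tailing is re-scheduled
    } catch (ExecutionException e) {
      System.err.println("Task failed: " + e.getCause());
    }
  }
}
{code}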






[jira] [Updated] (HDFS-14934) [SBN Read] Standby NN throws many InterruptedExceptions when dfs.ha.tail-edits.period is 0

2019-12-28 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14934:

Fix Version/s: 2.10.1
   3.2.2
   3.1.4
   3.3.0
 Assignee: Ayush Saxena
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.2 and branch-3.1, and committed to branch-2.10 
after fixing a minor conflict. Thanks for your contribution, [~ayushtkn]!

> [SBN Read] Standby NN throws many InterruptedExceptions when 
> dfs.ha.tail-edits.period is 0
> --
>
> Key: HDFS-14934
> URL: https://issues.apache.org/jira/browse/HDFS-14934
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Takanobu Asanuma
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HDFS-14934-01.patch
>
>
> When dfs.ha.tail-edits.period is 0 ms (or a very short time), there are 
> many WARN logs in the standby NN.
> {noformat}
> 2019-10-25 16:25:46,945 [Logger channel (from parallel executor) to 
> <NN hostname>/<NN address>:<port>] WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(55)) - Thread 
> (Thread[Logger channel (from parallel executor) to 
> <NN hostname>/<NN address>:<port>,5,main]) interrupted: 
> java.lang.InterruptedException
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:509)
>   at 
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
>   at 
> org.apache.hadoop.util.concurrent.ExecutorHelper.logThrowableFromAfterExecute(ExecutorHelper.java:48)
>   at 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor.afterExecute(HadoopThreadPoolExecutor.java:90)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1157)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}






[jira] [Commented] (HDFS-14934) [SBN Read] Standby NN throws many InterruptedExceptions when dfs.ha.tail-edits.period is 0

2019-12-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004477#comment-17004477
 ] 

Hudson commented on HDFS-14934:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17799 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17799/])
HDFS-14934. [SBN Read] Standby NN throws many InterruptedExceptions when 
(tasanuma: rev dc32f583afffc372f78fb45211c3e7ce13f6a4be)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java


> [SBN Read] Standby NN throws many InterruptedExceptions when 
> dfs.ha.tail-edits.period is 0
> --
>
> Key: HDFS-14934
> URL: https://issues.apache.org/jira/browse/HDFS-14934
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14934-01.patch
>
>
> When dfs.ha.tail-edits.period is 0 ms (or a very short time), there are 
> many WARN logs in the standby NN.
> {noformat}
> 2019-10-25 16:25:46,945 [Logger channel (from parallel executor) to 
> <NN hostname>/<NN address>:<port>] WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(55)) - Thread 
> (Thread[Logger channel (from parallel executor) to 
> <NN hostname>/<NN address>:<port>,5,main]) interrupted: 
> java.lang.InterruptedException
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:509)
>   at 
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
>   at 
> org.apache.hadoop.util.concurrent.ExecutorHelper.logThrowableFromAfterExecute(ExecutorHelper.java:48)
>   at 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor.afterExecute(HadoopThreadPoolExecutor.java:90)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1157)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}






[jira] [Commented] (HDFS-14934) [SBN Read] Standby NN throws many InterruptedExceptions when dfs.ha.tail-edits.period is 0

2019-12-28 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004470#comment-17004470
 ] 

Takanobu Asanuma commented on HDFS-14934:
-

+1 on [^HDFS-14934-01.patch]. Will commit it soon.

> [SBN Read] Standby NN throws many InterruptedExceptions when 
> dfs.ha.tail-edits.period is 0
> --
>
> Key: HDFS-14934
> URL: https://issues.apache.org/jira/browse/HDFS-14934
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-14934-01.patch
>
>
> When dfs.ha.tail-edits.period is 0 ms (or a very short time), there are 
> many WARN logs in the standby NN.
> {noformat}
> 2019-10-25 16:25:46,945 [Logger channel (from parallel executor) to 
> <NN hostname>/<NN address>:<port>] WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(55)) - Thread 
> (Thread[Logger channel (from parallel executor) to 
> <NN hostname>/<NN address>:<port>,5,main]) interrupted: 
> java.lang.InterruptedException
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:509)
>   at 
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
>   at 
> org.apache.hadoop.util.concurrent.ExecutorHelper.logThrowableFromAfterExecute(ExecutorHelper.java:48)
>   at 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor.afterExecute(HadoopThreadPoolExecutor.java:90)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1157)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}


