[jira] [Assigned] (HDFS-15507) [JDK 11] Fix javadoc errors in hadoop-hdfs-client module

2020-08-04 Thread Xieming Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xieming Li reassigned HDFS-15507:
-

Assignee: Xieming Li

> [JDK 11] Fix javadoc errors in hadoop-hdfs-client module
> 
>
> Key: HDFS-15507
> URL: https://issues.apache.org/jira/browse/HDFS-15507
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie
>
> {noformat}
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientGSIContext.java:32:
>  error: self-closing element not allowed
> [ERROR]  * 
> [ERROR]^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java:1245:
>  error: unexpected text
> [ERROR]* Same as {@link #create(String, FsPermission, EnumSet, boolean, 
> short, long,
> [ERROR]  ^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java:161:
>  error: reference not found
> [ERROR]* {@link HdfsConstants#LEASE_HARDLIMIT_PERIOD hard limit}. Until 
> the
> [ERROR] ^
> {noformat}
> Full error log: 
> https://gist.github.com/aajisaka/7ab1c48a9bd7a0fdb11fa82eb04874d5
> How to reproduce the failure:
> * Remove {{true}} from pom.xml
> * Run {{mvn process-sources javadoc:javadoc-no-fork}}
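
For reference, the typical fixes for these JDK 11 javadoc errors follow a few
simple patterns. A minimal sketch of those patterns (illustrative doc text,
not the committed patch):
{noformat}
// "self-closing element not allowed": JDK 11 javadoc rejects <p/>.
// before:  * <p/>
// after:   * <p>

// "unexpected text": a {@link} must name an existing overload exactly;
// spell out the full parameter list of a real create(...) method, or fall
// back to plain text if no overload matches.

// "reference not found": fully qualify the target, or point the tag at a
// field/class that still exists, e.g.
/** See {@link org.apache.hadoop.hdfs.protocol.HdfsConstants}. */
{noformat}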






[jira] [Assigned] (HDFS-15506) [JDK 11] Fix javadoc errors in hadoop-hdfs module

2020-08-04 Thread Xieming Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xieming Li reassigned HDFS-15506:
-

Assignee: Xieming Li

> [JDK 11] Fix javadoc errors in hadoop-hdfs module
> -
>
> Key: HDFS-15506
> URL: https://issues.apache.org/jira/browse/HDFS-15506
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie
>
> {noformat}
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java:43:
>  error: self-closing element not allowed
> [ERROR]  * 
> [ERROR]^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java:682:
>  error: malformed HTML
> [ERROR]* a NameNode per second. Values <= 0 disable throttling. This 
> affects
> [ERROR]^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java:1780:
>  error: exception not thrown: java.io.FileNotFoundException
> [ERROR]* @throws FileNotFoundException
> [ERROR]  ^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java:176:
>  error: @param name not found
> [ERROR]* @param mtime The snapshot creation time set by Time.now().
> [ERROR] ^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:2187:
>  error: exception not thrown: java.lang.Exception
> [ERROR]* @exception Exception if the filesystem does not exist.
> [ERROR] ^
> {noformat}
> Full error log: 
> https://gist.github.com/aajisaka/a0c16f0408a623e798dd7df29fbddf82
> How to reproduce the failure:
> * Remove {{true}} from pom.xml
> * Run {{mvn process-sources javadoc:javadoc-no-fork}}
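
The fixes here follow the same pattern as HDFS-15507; a minimal sketch of the
usual remedies (assumptions, not the committed patch):
{noformat}
// "malformed HTML": escape a bare '<' in doc prose.
/** Values {@literal <=} 0 disable throttling. */

// "exception not thrown": only document exceptions the method declares;
// either add "throws FileNotFoundException" to the signature or drop the
// @throws/@exception tag.

// "@param name not found": remove @param tags for parameters that no
// longer exist, or rename them to match the current signature.
{noformat}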






[jira] [Commented] (HDFS-15288) Add Available Space Rack Fault Tolerant BPP

2020-08-04 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17171182#comment-17171182
 ] 

Mingliang Liu commented on HDFS-15288:
--

Useful improvement! Do you mind adding a release note to this JIRA, since it 
brings a new BPP as well as config changes? Thanks.

> Add Available Space Rack Fault Tolerant BPP
> ---
>
> Key: HDFS-15288
> URL: https://issues.apache.org/jira/browse/HDFS-15288
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15288-01.patch, HDFS-15288-02.patch, 
> HDFS-15288-03.patch, HDFS-15288-Addendum-01.patch
>
>
> The present {{AvailableSpaceBlockPlacementPolicy}} extends the default block 
> placement policy, which makes it apt for replicated files, but it is not very 
> efficient for EC files, which by default use 
> {{BlockPlacementPolicyRackFaultTolerant}}. So we propose to add a new BPP 
> that applies the same optimization as ASBPP while keeping the spread of 
> blocks across the maximum number of racks, as RackFaultTolerantBPP does.
> The new policy could extend {{BlockPlacementPolicyRackFaultTolerant}} rather 
> than {{BlockPlacementPolicyDefault}} (as ASBPP does) and keep the rest of the 
> optimization logic the same as ASBPP.
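
For anyone trying this out, a minimal sketch of enabling the new policy on the
NameNode. The class name comes from the patch; the preference-fraction key is
assumed by analogy with the ASBPP one, so verify both against the committed
code:
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

Configuration conf = new HdfsConfiguration();
// Select the new BPP:
conf.set("dfs.block.replicator.classname",
    "org.apache.hadoop.hdfs.server.blockmanagement."
        + "AvailableSpaceRackFaultTolerantBlockPlacementPolicy");
// Bias placement toward nodes with more free space (assumed key name):
conf.setFloat("dfs.namenode.available-space-rack-fault-tolerant-block-"
    + "placement-policy.balanced-space-preference-fraction", 0.6f);
{noformat}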






[jira] [Commented] (HDFS-15508) [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module

2020-08-04 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17171173#comment-17171173
 ] 

Mingliang Liu commented on HDFS-15508:
--

+1

The checkstyle warning is related, but I think we need that line to be longer 
than 80 chars.

HADOOP-17179 is resolved. Will this get a clean javadoc report?

> [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module
> -
>
> Key: HDFS-15508
> URL: https://issues.apache.org/jira/browse/HDFS-15508
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-15508.01.patch
>
>
> {noformat}
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/package-info.java:21:
>  error: reference not found
> [ERROR]  * Implementations should extend {@link 
> AbstractDelegationTokenSecretManager}.
> [ERROR] ^
> {noformat}
> Full error log: 
> https://gist.github.com/aajisaka/a7dde76a4ba2942f60bf6230ec9ed6e1
> How to reproduce the failure:
> * Remove {{true}} from pom.xml
> * Run {{mvn process-sources javadoc:javadoc-no-fork}}
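
A minimal sketch of the usual fix, assuming the patch fully qualifies the
reference (package-info.java files often lack the import the {@link} needs):
{noformat}
/**
 * Implementations should extend {@link
 * org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager}.
 */
package org.apache.hadoop.hdfs.server.federation.router.security.token;
{noformat}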






[jira] [Commented] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations

2020-08-04 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17171013#comment-17171013
 ] 

Íñigo Goiri commented on HDFS-15510:


Overall, we have two sets of quotas: the subcluster one and the general one.
The issue here seems to be that the general one is not accounted correctly; we 
would need to fix that.

> RBF: Quota and Content Summary was not correct in Multiple Destinations
> ---
>
> Key: HDFS-15510
> URL: https://issues.apache.org/jira/browse/HDFS-15510
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Critical
>
> Steps:
> *) Create a mount entry with multiple destinations (say, 2).
> *) Set the NS quota to 10 for the mount entry via the dfsrouteradmin command; 
> the content summary on the mount entry then shows an NS quota of 20.
> *) Create 10 files through the router; on creating the 11th file, an NS Quota 
> Exceeded Exception is thrown.
> Although the content summary shows the NS quota as 20, we are not able to 
> create 20 files.
>  
> The problem is that the router stores the mount entry's NS quota as 10, but 
> it also sets an NS quota of 10 on both name services, so the content summary 
> on the mount entry aggregates the summaries of both name services and reports 
> an NS quota of 20.
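
To make the mismatch concrete, a worked illustration using the numbers from
the steps above (a sketch, not verbatim output):
{noformat}
# Mount entry /data -> ns0:/data, ns1:/data
$ hdfs dfsrouteradmin -setQuota /data -nsQuota 10
#   Router stores nsQuota=10 for the mount entry, and also sets
#   nsQuota=10 on both ns0:/data and ns1:/data.
$ hdfs dfs -count -q /data
#   Shows nsQuota=20: the router aggregated 10 + 10 from the two
#   subclusters, yet the 11th create fails because 10 is enforced.
{noformat}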






[jira] [Assigned] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-08-04 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HDFS-15025:


Assignee: YaYun Wang

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>
> Non-volatile memory (NVDIMM) is faster than SSD and can be used 
> simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM 
> not only improves the response rate of HDFS but also ensures the reliability 
> of the data.
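
A sketch of how the proposed storage type might be configured, following the
existing [SSD]/[DISK] tag convention; the NVDIMM tag is an assumption until
the patch lands:
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

Configuration conf = new HdfsConfiguration();
// Tag one datanode volume as NVDIMM alongside SSD and DISK volumes
// (hypothetical mount points):
conf.set("dfs.datanode.data.dir",
    "[NVDIMM]/mnt/pmem0,[SSD]/mnt/ssd0,[DISK]/data/dn");
{noformat}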






[jira] [Commented] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations

2020-08-04 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17170878#comment-17170878
 ] 

Hemanth Boyina commented on HDFS-15510:
---

[~linyiqun] [~elgoiri] [~tasanuma], have you come across this scenario before?

Any suggestions for solving the issue?

> RBF: Quota and Content Summary was not correct in Multiple Destinations
> ---
>
> Key: HDFS-15510
> URL: https://issues.apache.org/jira/browse/HDFS-15510
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Critical
>
> Steps:
> *) Create a mount entry with multiple destinations (say, 2).
> *) Set the NS quota to 10 for the mount entry via the dfsrouteradmin command; 
> the content summary on the mount entry then shows an NS quota of 20.
> *) Create 10 files through the router; on creating the 11th file, an NS Quota 
> Exceeded Exception is thrown.
> Although the content summary shows the NS quota as 20, we are not able to 
> create 20 files.
>  
> The problem is that the router stores the mount entry's NS quota as 10, but 
> it also sets an NS quota of 10 on both name services, so the content summary 
> on the mount entry aggregates the summaries of both name services and reports 
> an NS quota of 20.






[jira] [Created] (HDFS-15512) Remove smallBufferSize in DFSClient

2020-08-04 Thread Takanobu Asanuma (Jira)
Takanobu Asanuma created HDFS-15512:
---

 Summary: Remove smallBufferSize in DFSClient
 Key: HDFS-15512
 URL: https://issues.apache.org/jira/browse/HDFS-15512
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


It seems to be an unused variable.






[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-08-04 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17170689#comment-17170689
 ] 

huangtianhua commented on HDFS-15025:
-

I have proposed a PR: https://github.com/apache/hadoop/pull/2189

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Priority: Major
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>
> Non-volatile memory (NVDIMM) is faster than SSD and can be used 
> simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM 
> not only improves the response rate of HDFS but also ensures the reliability 
> of the data.






[jira] [Updated] (HDFS-15497) Make snapshot limit on global as well per snapshot root directory configurable

2020-08-04 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-15497:
---
Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Make snapshot limit on global as well per snapshot root directory configurable
> --
>
> Key: HDFS-15497
> URL: https://issues.apache.org/jira/browse/HDFS-15497
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15497.000.patch
>
>
> Currently, there is no configurable limit imposed on the number of snapshots 
> in the system, either at the filesystem level or per snapshottable root 
> directory. Too many snapshots in the system can potentially bloat up the 
> namespace, and with the ordered deletion feature on, too many snapshots per 
> snapshottable root directory will make the deletion of the oldest snapshot 
> more expensive. This Jira aims to impose these configurable limits.
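
A minimal sketch of the resulting knobs; the key names are assumptions, so
check hdfs-default.xml in the committed patch:
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

Configuration conf = new HdfsConfiguration();
// Global cap across the whole namespace (assumed key name):
conf.setInt("dfs.namenode.snapshot.filesystem.limit", 65536);
// Cap per snapshottable root directory (assumed key name):
conf.setInt("dfs.namenode.snapshot.max.limit", 65536);
{noformat}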






[jira] [Updated] (HDFS-15492) Make trash root inside each snapshottable directory

2020-08-04 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15492:
--
Status: Patch Available  (was: In Progress)

> Make trash root inside each snapshottable directory
> ---
>
> Key: HDFS-15492
> URL: https://issues.apache.org/jira/browse/HDFS-15492
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, hdfs-client
>Affects Versions: 3.2.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> We have seen FSImage corruption cases (e.g. HDFS-13101) where files inside a 
> snapshottable directory are moved outside of it. The most common case of this 
> is when trash is enabled and a user deletes a file via the command line 
> without skipTrash.
> This jira aims to create a trash root inside each snapshottable directory, 
> the same way encryption zones behave at the moment.
> This will make trash cleanup a little more expensive on the NameNode, as it 
> will have to iterate over all trash roots, but that should be fine as long as 
> there aren't many snapshottable directories.
> I could make this improvement optional and disabled by default if needed, 
> e.g. via {{dfs.namenode.snapshot.trashroot.enabled}}.
> One small caveat: when snapshots are disallowed on a snapshottable directory 
> while this improvement is in place, the client should merge the snapshottable 
> directory's trash with the user's trash to ensure proper trash cleanup.
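
A sketch of the intended client-side behavior; the config key is quoted from
the description above, while the resulting path is an assumption:
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
// With dfs.namenode.snapshot.trashroot.enabled = true, the trash root for
// a file under a snapshottable directory should resolve inside it:
Path trash = fs.getTrashRoot(new Path("/snapdir/file1"));
// expected: /snapdir/.Trash/<user>   instead of   /user/<user>/.Trash
{noformat}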






[jira] [Resolved] (HDFS-15511) Support AvailableSpaceBlockPlacementPolicy in BlockPlacementPolicyRackFaultTolerant

2020-08-04 Thread Amithsha (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amithsha resolved HDFS-15511.
-
Resolution: Resolved

> Support AvailableSpaceBlockPlacementPolicy in 
> BlockPlacementPolicyRackFaultTolerant
> ---
>
> Key: HDFS-15511
> URL: https://issues.apache.org/jira/browse/HDFS-15511
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Amithsha
>Priority: Major
>
> BlockPlacementPolicyRackFaultTolerant places one block per rack, but because 
> of this, heterogeneous datanodes are not supported in Hadoop 3. So we need to 
> change BlockPlacementPolicyRackFaultTolerant to place one block per rack 
> while also applying the AvailableSpaceBlockPlacementPolicy feature.






[jira] [Commented] (HDFS-15511) Support AvailableSpaceBlockPlacementPolicy in BlockPlacementPolicyRackFaultTolerant

2020-08-04 Thread Amithsha (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17170580#comment-17170580
 ] 

Amithsha commented on HDFS-15511:
-

[~ayushtkn] Yes, this solves our use case!

> Support AvailableSpaceBlockPlacementPolicy in 
> BlockPlacementPolicyRackFaultTolerant
> ---
>
> Key: HDFS-15511
> URL: https://issues.apache.org/jira/browse/HDFS-15511
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Amithsha
>Priority: Major
>
> BlockPlacementPolicyRackFaultTolerant places one block per rack, but because 
> of this, heterogeneous datanodes are not supported in Hadoop 3. So we need to 
> change BlockPlacementPolicyRackFaultTolerant to place one block per rack 
> while also applying the AvailableSpaceBlockPlacementPolicy feature.


