[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS

2020-08-12 Thread liusheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176706#comment-17176706
 ] 

liusheng commented on HDFS-15098:
-

Hi [~weichiu],

Could you please help review this patch? Thank you.

> Add SM4 encryption method for HDFS
> --
>
> Key: HDFS-15098
> URL: https://issues.apache.org/jira/browse/HDFS-15098
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.4.0
>Reporter: liusheng
>Assignee: liusheng
>Priority: Major
>  Labels: sm4
> Attachments: HDFS-15098.001.patch, HDFS-15098.002.patch, 
> HDFS-15098.003.patch, HDFS-15098.004.patch, HDFS-15098.005.patch, 
> HDFS-15098.006.patch, HDFS-15098.007.patch, HDFS-15098.008.patch, 
> HDFS-15098.009.patch
>
>
> SM4 (formerly SMS4) is a block cipher used in the Chinese National Standard 
> for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure).
>  SM4 was proposed for the IEEE 802.11i standard, but has so far been 
> rejected by ISO. One of the reasons for the rejection has been opposition to 
> the WAPI fast-track proposal by the IEEE. Please see:
> [https://en.wikipedia.org/wiki/SM4_(cipher)]
>  
> *Use SM4 on HDFS as follows:*
> 1. Configure Hadoop KMS
>  2. Test HDFS SM4:
>  hadoop key create key1 -cipher 'SM4/CTR/NoPadding'
>  hdfs dfs -mkdir /benchmarks
>  hdfs crypto -createZone -keyName key1 -path /benchmarks
> *Requires:*
>  1. openssl version >= 1.1.1
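The cipher name passed to `hadoop key create` above follows the usual JCE-style `ALGORITHM/MODE/PADDING` convention. As a rough illustration of how such a suite string decomposes (the helper below is hypothetical, not Hadoop's actual CipherSuite API):

```python
# Illustrative only: split a JCE-style cipher suite string such as
# "SM4/CTR/NoPadding" into its algorithm, mode, and padding parts.
def parse_cipher_suite(suite: str) -> dict:
    parts = suite.split("/")
    if len(parts) != 3:
        raise ValueError(f"expected ALGORITHM/MODE/PADDING, got {suite!r}")
    algorithm, mode, padding = parts
    return {"algorithm": algorithm, "mode": mode, "padding": padding}

print(parse_cipher_suite("SM4/CTR/NoPadding"))
# {'algorithm': 'SM4', 'mode': 'CTR', 'padding': 'NoPadding'}
```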



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15527) Error On adding new Namespace

2020-08-12 Thread Thangamani Murugasamy (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176619#comment-17176619
 ] 

Thangamani Murugasamy commented on HDFS-15527:
--

I tried to format the new NameNode and got the error below. It is using the CM (Cloudera Manager) format action.

args = [-format, -clusterId, cluster22, -nonInteractive]

 

2020-08-12 04:53:17,374 DEBUG org.apache.hadoop.ipc.Client: IPC Client 
(481651130) connection to journalnode12.x.com/xx.xx.xxx.18:8485 from 
hdfs/namenn08.x@domain.com: starting, having connections 3
2020-08-12 04:53:17,374 DEBUG org.apache.hadoop.ipc.Client: IPC Client 
(481651130) connection to journalnode09.x.com/xx.xx.xxx.15:8485 from 
hdfs/namenn08.x@domain.com: starting, having connections 3
2020-08-12 04:53:17,374 DEBUG org.apache.hadoop.ipc.Client: IPC Client 
(481651130) connection to journalnode08.x.com/xx.xx.xxx.14:8485 from 
hdfs/namenn08.x@domain.com: starting, having connections 3
2020-08-12 04:53:17,375 DEBUG org.apache.hadoop.ipc.Client: IPC Client 
(481651130) connection to journalnode09.x.com/xx.xx.xxx.15:8485 from 
hdfs/namenn08.x@domain.com sending #1 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.isFormatted
2020-08-12 04:53:17,375 DEBUG org.apache.hadoop.ipc.Client: IPC Client 
(481651130) connection to journalnode12.x.com/xx.xx.xxx.18:8485 from 
hdfs/namenn08.x@domain.com sending #0 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.isFormatted
2020-08-12 04:53:17,375 DEBUG org.apache.hadoop.ipc.Client: IPC Client 
(481651130) connection to journalnode08.x.com/xx.xx.xxx.14:8485 from 
hdfs/namenn08.x@domain.com sending #2 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.isFormatted
2020-08-12 04:53:17,382 DEBUG org.apache.hadoop.ipc.Client: IPC Client 
(481651130) connection to journalnode12.x.com/xx.xx.xxx.18:8485 from 
hdfs/namenn08.x@domain.com got value #0
2020-08-12 04:53:17,382 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: Call: 
isFormatted took 160ms
2020-08-12 04:53:17,385 DEBUG org.apache.hadoop.ipc.Client: IPC Client 
(481651130) connection to journalnode09.x.com/xx.xx.xxx.15:8485 from 
hdfs/namenn08.x@domain.com got value #1
2020-08-12 04:53:17,385 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: Call: 
isFormatted took 163ms
2020-08-12 04:53:17,426 DEBUG org.apache.hadoop.ipc.Client: IPC Client 
(481651130) connection to journalnode08.x.com/xx.xx.xxx.14:8485 from 
hdfs/namenn08.x@domain.com got value #2
2020-08-12 04:53:17,427 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: Call: 
isFormatted took 205ms
2020-08-12 04:53:17,429 DEBUG org.apache.hadoop.util.ExitUtil: Exiting with 
status 1: ExitException
1: ExitException
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:304)
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:292)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1612)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)

> Error On adding new Namespace
> -
>
> Key: HDFS-15527
> URL: https://issues.apache.org/jira/browse/HDFS-15527
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, ha, nn
>Affects Versions: 3.0.0
>Reporter: Thangamani Murugasamy
>Priority: Blocker
>
> We have one namespace and are trying to add another one, but we always get 
> the error message below.
>  
> The new NameNodes never become part of the existing namespace, and we don't 
> see any "nn" directories before adding the namespace.
>  
> 2020-08-12 04:59:53,947 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: NameNode is not formatted.
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:237)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:950)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:929)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
> 2020-08-12 04:59:53,955 DEBUG 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Closing log when already 
> closed
> ==
>  
>  
> 2020-08-12 04:59:53,976 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.io.IOException: NameNode is not formatted.
>  at 

[jira] [Created] (HDFS-15529) getChildFilesystems should include fallback fs as well

2020-08-12 Thread Uma Maheswara Rao G (Jira)
Uma Maheswara Rao G created HDFS-15529:
--

 Summary: getChildFilesystems should include fallback fs as well
 Key: HDFS-15529
 URL: https://issues.apache.org/jira/browse/HDFS-15529
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: viewfs, viewfsOverloadScheme
Affects Versions: 3.4.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


Currently the getChildFileSystems API is used by many other APIs, such as 

getAdditionalTokenIssuers, getTrashRoots etc.

If the fallback filesystem is not included in the child filesystems, 
applications like YARN that use getAdditionalTokenIssuers will not get 
delegation tokens for the fallback fs. This would be a critical bug for secure 
clusters.



Similarly for trash roots: when applications use getTrashRoots, trash folders 
from the fallback fs are not considered, so they will leak past the cleanup 
logic.
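The failure mode can be sketched with a toy mount table: if child-filesystem enumeration only walks the mount points, anything reachable only through the fallback never has a delegation token issued for it. All names below are illustrative, not the actual ViewFileSystem API:

```python
# Toy model: a view filesystem with mount points plus an optional fallback.
# Token issuance walks get_child_filesystems(); if the fallback is omitted,
# no delegation token is ever fetched for it.
class ToyViewFs:
    def __init__(self, mounts, fallback=None):
        self.mounts = mounts          # e.g. {"/data": "hdfs://ns1"}
        self.fallback = fallback      # e.g. "hdfs://ns-fallback"

    def get_child_filesystems(self, include_fallback):
        children = set(self.mounts.values())
        if include_fallback and self.fallback is not None:
            children.add(self.fallback)
        return children

vfs = ToyViewFs({"/data": "hdfs://ns1", "/logs": "hdfs://ns2"},
                fallback="hdfs://ns-fallback")

# Bug: fallback is skipped, so no token is issued for it.
assert "hdfs://ns-fallback" not in vfs.get_child_filesystems(include_fallback=False)
# Fix: including the fallback makes token issuance cover it.
assert "hdfs://ns-fallback" in vfs.get_child_filesystems(include_fallback=True)
```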

 






[jira] [Commented] (HDFS-15527) Error On adding new Namespace

2020-08-12 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176544#comment-17176544
 ] 

Mingliang Liu commented on HDFS-15527:
--

As your error shows, did you format the NN first? Also, did you follow the full 
documentation, such as 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/Federation.html?
 Overall this does not look like a bug unless you provide more context. For 
general usage questions, please send email to u...@hadoop.apache.org

> Error On adding new Namespace
> -
>
> Key: HDFS-15527
> URL: https://issues.apache.org/jira/browse/HDFS-15527
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, ha, nn
>Affects Versions: 3.0.0
>Reporter: Thangamani Murugasamy
>Priority: Blocker
>
> We have one namespace and are trying to add another one, but we always get 
> the error message below.
>  
> The new NameNodes never become part of the existing namespace, and we don't 
> see any "nn" directories before adding the namespace.
>  
> 2020-08-12 04:59:53,947 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: NameNode is not formatted.
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:237)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:950)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:929)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
> 2020-08-12 04:59:53,955 DEBUG 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Closing log when already 
> closed
> ==
>  
>  
> 2020-08-12 04:59:53,976 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.io.IOException: NameNode is not formatted.
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:237)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:950)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:929)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
> 2020-08-12 04:59:53,978 DEBUG org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: java.io.IOException: NameNode is not formatted.
> 1: java.io.IOException: NameNode is not formatted.
>  at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1726)
> Caused by: java.io.IOException: NameNode is not formatted.
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:237)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:950)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:929)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
> 2020-08-12 04:59:53,979 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: java.io.IOException: NameNode is not formatted.






[jira] [Commented] (HDFS-15518) Wrong operation name in FsNamesystem for listSnapshots

2020-08-12 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176534#comment-17176534
 ] 

Hemanth Boyina commented on HDFS-15518:
---

Committed to trunk. Thanks for the contribution, [~aryangupta1998]!

> Wrong operation name in FsNamesystem for listSnapshots
> --
>
> Key: HDFS-15518
> URL: https://issues.apache.org/jira/browse/HDFS-15518
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mukul Kumar Singh
>Assignee: Aryan Gupta
>Priority: Major
> Fix For: 3.4.0
>
>
> listSnapshots uses the string listSnapshotDirectory as the operation name in 
> place of ListSnapshot.
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L7026






[jira] [Resolved] (HDFS-15518) Wrong operation name in FsNamesystem for listSnapshots

2020-08-12 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina resolved HDFS-15518.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

> Wrong operation name in FsNamesystem for listSnapshots
> --
>
> Key: HDFS-15518
> URL: https://issues.apache.org/jira/browse/HDFS-15518
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mukul Kumar Singh
>Assignee: Aryan Gupta
>Priority: Major
> Fix For: 3.4.0
>
>
> listSnapshots uses the string listSnapshotDirectory as the operation name in 
> place of ListSnapshot.
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L7026






[jira] [Updated] (HDFS-15524) Add edit log entry for Snapshot deletion GC thread snapshot deletion

2020-08-12 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-15524:
---
Status: Patch Available  (was: In Progress)

> Add edit log entry for Snapshot deletion GC thread snapshot deletion
> 
>
> Key: HDFS-15524
> URL: https://issues.apache.org/jira/browse/HDFS-15524
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>
> Currently, the snapshot deletion GC thread doesn't create an edit log 
> transaction when a snapshot is actually garbage collected. As a result, if 
> the GC thread deletes snapshots and the NameNode is then restarted, the 
> snapshots garbage collected before the restart will reappear until the GC 
> thread picks them up again, because no edits were recorded for the actual 
> garbage collection. Meanwhile, the data may already have been deleted from 
> the DataNodes, which can lead to many spurious missing-block alerts.
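The restart behaviour described above can be simulated with a toy edit log: a deletion that is never logged is lost on replay, so the snapshot reappears. This is purely illustrative, not NameNode code:

```python
# Toy NameNode state: snapshots are rebuilt by replaying the edit log.
def replay(edit_log):
    snapshots = set()
    for op, name in edit_log:
        if op == "create":
            snapshots.add(name)
        elif op == "delete":
            snapshots.discard(name)
    return snapshots

edit_log = [("create", "s1")]
live = replay(edit_log)

# GC thread deletes s1 in memory but records no edit log transaction:
live.discard("s1")
assert "s1" not in live

# After a restart, state is rebuilt from the edit log alone:
after_restart = replay(edit_log)
assert "s1" in after_restart   # the "deleted" snapshot reappears
```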






[jira] [Work started] (HDFS-15524) Add edit log entry for Snapshot deletion GC thread snapshot deletion

2020-08-12 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-15524 started by Shashikant Banerjee.
--
> Add edit log entry for Snapshot deletion GC thread snapshot deletion
> 
>
> Key: HDFS-15524
> URL: https://issues.apache.org/jira/browse/HDFS-15524
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>
> Currently, the snapshot deletion GC thread doesn't create an edit log 
> transaction when a snapshot is actually garbage collected. As a result, if 
> the GC thread deletes snapshots and the NameNode is then restarted, the 
> snapshots garbage collected before the restart will reappear until the GC 
> thread picks them up again, because no edits were recorded for the actual 
> garbage collection. Meanwhile, the data may already have been deleted from 
> the DataNodes, which can lead to many spurious missing-block alerts.






[jira] [Commented] (HDFS-14815) RBF: Update the quota in MountTable when calling setQuota on a MountTable src.

2020-08-12 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176334#comment-17176334
 ] 

Jinglun commented on HDFS-14815:


Hi [~hemanthboyina], thanks for raising your concerns. Before this change, 
even though DFSAdmin could change the quota on the NameNode, the quota would 
eventually be changed back by the Router. So the admin would see the quota 
change succeed and then be magically reverted, which is very confusing. 
Throwing an exception to remind the admin is therefore very helpful; after 
all, the 'compatible quota update' is not really successful.

IMO the DFSAdmin shouldn't be able to change the quota of a mount point. All 
mount-table-related work should be done with the RouterAdmin; I think that's 
why we have the RouterAdmin.

About the exception: since the DFSAdmin is used by administrators who have 
full knowledge of both normal HDFS clusters and RBF HDFS clusters, I think an 
administrator will be able to understand it. If the exception message is not 
clear enough, maybe we can make it something like 'xxx is not allowed to 
change the quota of /path because it is a mount point. Use RouterAdmin instead.'
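Option 1 from the issue, rejecting setQuota on mount points, can be sketched as follows. The mount-table lookup and the error message are illustrative, not the actual Router code:

```python
# Illustrative router-side guard: refuse quota changes on mount points,
# directing admins to the RouterAdmin tool instead.
MOUNT_POINTS = {"/user", "/data"}   # hypothetical mount table entries

def set_quota(path, ns_quota, ss_quota):
    if path in MOUNT_POINTS:
        raise PermissionError(
            f"setQuota is not allowed to change the quota of {path} "
            "because it is a mount point. Use RouterAdmin instead.")
    return {"path": path, "nsQuota": ns_quota, "ssQuota": ss_quota}

set_quota("/tmp/job1", 1000, 10**12)       # plain path: allowed
try:
    set_quota("/user", 1000, 10**12)       # mount point: rejected
except PermissionError as e:
    print(e)
```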

> RBF: Update the quota in MountTable when calling setQuota on a MountTable src.
> --
>
> Key: HDFS-14815
> URL: https://issues.apache.org/jira/browse/HDFS-14815
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14815.001.patch, HDFS-14815.002.patch, 
> HDFS-14815.003.patch, HDFS-14815.004.patch, HDFS-14815.005.patch
>
>
> The method setQuota() can make the remote quota (the quota on the real 
> clusters) inconsistent with the MountTable. I think we have 3 ways to fix it:
>  # Reject all setQuota() RPCs that try to change the quota of a mount 
> table entry.
>  # Let setQuota() change the mount table quota. Update the quota on ZK 
> first and then update the remote quotas.
>  # Do nothing. The RouterQuotaUpdateService will eventually make all the 
> remote quotas right. We can tolerate short-term inconsistencies.
> I prefer option 1 because I want the RouterAdmin to be the only entrance for 
> updating the MountTable.
> With option 3 we don't need to change anything, but the quota will be 
> inconsistent for a short term. The remote quota takes effect immediately and 
> is automatically changed back after a while. Users might be confused by this 
> behavior.
>  






[jira] [Updated] (HDFS-15527) Error On adding new Namespace

2020-08-12 Thread Thangamani Murugasamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thangamani Murugasamy updated HDFS-15527:
-
Component/s: ha
 federation
   Priority: Blocker  (was: Major)

> Error On adding new Namespace
> -
>
> Key: HDFS-15527
> URL: https://issues.apache.org/jira/browse/HDFS-15527
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, ha, nn
>Affects Versions: 3.0.0
>Reporter: Thangamani Murugasamy
>Priority: Blocker
>
> We have one namespace and are trying to add another one, but we always get 
> the error message below.
>  
> The new NameNodes never become part of the existing namespace, and we don't 
> see any "nn" directories before adding the namespace.
>  
> 2020-08-12 04:59:53,947 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: NameNode is not formatted.
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:237)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:950)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:929)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
> 2020-08-12 04:59:53,955 DEBUG 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Closing log when already 
> closed
> ==
>  
>  
> 2020-08-12 04:59:53,976 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.io.IOException: NameNode is not formatted.
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:237)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:950)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:929)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
> 2020-08-12 04:59:53,978 DEBUG org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: java.io.IOException: NameNode is not formatted.
> 1: java.io.IOException: NameNode is not formatted.
>  at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1726)
> Caused by: java.io.IOException: NameNode is not formatted.
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:237)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:950)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:929)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
> 2020-08-12 04:59:53,979 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: java.io.IOException: NameNode is not formatted.






[jira] [Created] (HDFS-15528) Not able to list encryption zone with federation

2020-08-12 Thread Thangamani Murugasamy (Jira)
Thangamani Murugasamy created HDFS-15528:


 Summary: Not able to list encryption zone with federation
 Key: HDFS-15528
 URL: https://issues.apache.org/jira/browse/HDFS-15528
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption, federation
Affects Versions: 3.0.0
Reporter: Thangamani Murugasamy


 hdfs crypto -listZones
IllegalArgumentException: 'viewfs://cluster14' is not an HDFS URI.

 

--

debug log

20/08/12 05:53:14 DEBUG util.Shell: setsid exited with exit code 0
20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field 
org.apache.hadoop.metrics2.lib.MutableRate 
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with 
annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of successful 
kerberos logins and latency (milliseconds)])
20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field 
org.apache.hadoop.metrics2.lib.MutableRate 
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with 
annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of failed 
kerberos logins and latency (milliseconds)])
20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field 
org.apache.hadoop.metrics2.lib.MutableRate 
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with 
annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[GetGroups])
20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field private 
org.apache.hadoop.metrics2.lib.MutableGaugeLong 
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal 
with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Renewal failures 
since startup])
20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field private 
org.apache.hadoop.metrics2.lib.MutableGaugeInt 
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with 
annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Renewal failures 
since last successful login])
20/08/12 05:53:14 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group 
related metrics
20/08/12 05:53:14 DEBUG security.SecurityUtil: Setting 
hadoop.security.token.service.use_ip to true
20/08/12 05:53:14 DEBUG security.Groups: Creating new Groups object
20/08/12 05:53:14 DEBUG security.Groups: Group mapping 
impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; 
cacheTimeout=30; warningDeltaMs=5000
20/08/12 05:53:14 DEBUG security.UserGroupInformation: hadoop login
20/08/12 05:53:14 DEBUG security.UserGroupInformation: hadoop login commit
20/08/12 05:53:14 DEBUG security.UserGroupInformation: using kerberos 
user:h...@corp.epsilon.com
20/08/12 05:53:14 DEBUG security.UserGroupInformation: Using user: 
"h...@corp.epsilon.com" with name h...@corp.epsilon.com
20/08/12 05:53:14 DEBUG security.UserGroupInformation: User entry: 
"h...@corp.epsilon.com"
20/08/12 05:53:14 DEBUG security.UserGroupInformation: UGI 
loginUser:h...@corp.epsilon.com (auth:KERBEROS)
20/08/12 05:53:14 DEBUG security.UserGroupInformation: Current time is 
1597233194735
20/08/12 05:53:14 DEBUG security.UserGroupInformation: Next refresh is 
1597261977000
20/08/12 05:53:14 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
20/08/12 05:53:14 DEBUG core.Tracer: span.receiver.classes = ; loaded no span 
receivers
20/08/12 05:53:14 DEBUG fs.FileSystem: Loading filesystems
20/08/12 05:53:14 DEBUG fs.FileSystem: s3n:// = class 
org.apache.hadoop.fs.s3native.NativeS3FileSystem from 
/opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-aws-3.0.0-cdh6.2.1.jar
20/08/12 05:53:14 DEBUG fs.FileSystem: gs:// = class 
com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem from 
/opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hadoop/gcs-connector-hadoop3-1.9.10-cdh6.2.1-shaded.jar
20/08/12 05:53:14 DEBUG fs.FileSystem: file:// = class 
org.apache.hadoop.fs.LocalFileSystem from 
/opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-common-3.0.0-cdh6.2.1.jar
20/08/12 05:53:14 DEBUG fs.FileSystem: viewfs:// = class 
org.apache.hadoop.fs.viewfs.ViewFileSystem from 
/opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-common-3.0.0-cdh6.2.1.jar
20/08/12 05:53:14 DEBUG fs.FileSystem: ftp:// = class 
org.apache.hadoop.fs.ftp.FTPFileSystem from 
/opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-common-3.0.0-cdh6.2.1.jar
20/08/12 05:53:14 DEBUG fs.FileSystem: har:// = class 
org.apache.hadoop.fs.HarFileSystem from 

[jira] [Created] (HDFS-15527) Error On adding new Namespace

2020-08-12 Thread Thangamani Murugasamy (Jira)
Thangamani Murugasamy created HDFS-15527:


 Summary: Error On adding new Namespace
 Key: HDFS-15527
 URL: https://issues.apache.org/jira/browse/HDFS-15527
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nn
Affects Versions: 3.0.0
Reporter: Thangamani Murugasamy


We have one namespace and are trying to add another one, but we always get the 
error message below.

The new NameNodes never become part of the existing namespace, and we don't see 
any "nn" directories before adding the namespace.

 

2020-08-12 04:59:53,947 WARN 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
loading fsimage
java.io.IOException: NameNode is not formatted.
 at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:237)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:950)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:929)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
2020-08-12 04:59:53,955 DEBUG org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Closing log when already closed

==

 

 

2020-08-12 04:59:53,976 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: 
Failed to start namenode.
java.io.IOException: NameNode is not formatted.
 at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:237)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:950)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:929)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
2020-08-12 04:59:53,978 DEBUG org.apache.hadoop.util.ExitUtil: Exiting with 
status 1: java.io.IOException: NameNode is not formatted.
1: java.io.IOException: NameNode is not formatted.
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1726)
Caused by: java.io.IOException: NameNode is not formatted.
 at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:237)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:950)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:929)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
2020-08-12 04:59:53,979 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
status 1: java.io.IOException: NameNode is not formatted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-08-12 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15493:
-
Target Version/s: 3.4.0
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

Committed to trunk and branch-3.3. Thanks for the contribution [~smarthan].

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Assignee: Chengwei Wang
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> HDFS-15493.003.patch, HDFS-15493.004.patch, HDFS-15493.005.patch, 
> HDFS-15493.006.patch, HDFS-15493.007.patch, HDFS-15493.008.patch, 
> fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the loader updates 
> the name cache and block map after adding each inode file to its inode 
> directory. Running these steps in parallel would reduce the time cost of 
> fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading 
> the fsimage (220M files & 240M blocks) took 470s; with this patch, the time 
> dropped to 410s.
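The idea behind the patch can be sketched as follows; this is a Python toy 
model, not the actual FSImage loader, and the dict structures and helper names 
are hypothetical stand-ins for the NameNode's internal data structures:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the NameNode structures the real patch touches.
name_cache = {}
block_map = {}

def update_name_cache(inodes):
    for inode in inodes:
        name_cache[inode["name"]] = inode["id"]

def update_block_map(inodes):
    for inode in inodes:
        for block in inode["blocks"]:
            block_map[block] = inode["id"]

def load_directory_section(inodes):
    # Files are attached to their directories first (serial); the two
    # bookkeeping updates then run concurrently instead of back to back.
    with ThreadPoolExecutor(max_workers=2) as pool:
        pool.submit(update_name_cache, inodes)
        pool.submit(update_block_map, inodes)
    # Leaving the "with" block waits for both tasks to finish.

inodes = [{"name": "a", "id": 1, "blocks": ["blk_1"]},
          {"name": "b", "id": 2, "blocks": ["blk_2", "blk_3"]}]
load_directory_section(inodes)
print(len(name_cache), len(block_map))  # prints "2 3"
```

Since the two updates write to independent structures, running them 
concurrently is safe and hides one update's cost behind the other's.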






[jira] [Updated] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.

2020-08-12 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15493:
-
Fix Version/s: 3.4.0
   3.3.1

> Update block map and name cache in parallel while loading fsimage.
> --
>
> Key: HDFS-15493
> URL: https://issues.apache.org/jira/browse/HDFS-15493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Assignee: Chengwei Wang
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, 
> HDFS-15493.003.patch, HDFS-15493.004.patch, HDFS-15493.005.patch, 
> HDFS-15493.006.patch, HDFS-15493.007.patch, HDFS-15493.008.patch, 
> fsimage-loading.log
>
>
> While loading the INodeDirectorySection of the fsimage, the loader updates 
> the name cache and block map after adding each inode file to its inode 
> directory. Running these steps in parallel would reduce the time cost of 
> fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617 applied, loading 
> the fsimage (220M files & 240M blocks) took 470s; with this patch, the time 
> dropped to 410s.






[jira] [Updated] (HDFS-15526) Tests in TestOzoneFileSystem should use the existing MiniOzoneCluster

2020-08-12 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15526:
--
Status: Patch Available  (was: Open)

> Tests in TestOzoneFileSystem should use the existing MiniOzoneCluster
> -
>
> Key: HDFS-15526
> URL: https://issues.apache.org/jira/browse/HDFS-15526
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Trivial
>
> In HDDS-2883, [~adoroszlai] made a change that greatly reduces the run time 
> of the test suite {{TestOzoneFileSystem}} by sharing one {{MiniOzoneCluster}} 
> among the tests.
> But 4 new tests have been added since and are not sharing that 
> {{MiniOzoneCluster}}.
> I was able to cut the run time of {{TestOzoneFileSystem}} from 3m18s to 
> 1m2s on my Mac. It should save even more run time in the GitHub workflow.






[jira] [Created] (HDFS-15526) Tests in TestOzoneFileSystem should use the existing MiniOzoneCluster

2020-08-12 Thread Siyao Meng (Jira)
Siyao Meng created HDFS-15526:
-

 Summary: Tests in TestOzoneFileSystem should use the existing 
MiniOzoneCluster
 Key: HDFS-15526
 URL: https://issues.apache.org/jira/browse/HDFS-15526
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Siyao Meng
Assignee: Siyao Meng


In HDDS-2883, [~adoroszlai] made a change that greatly reduces the run time of 
the test suite {{TestOzoneFileSystem}} by sharing one {{MiniOzoneCluster}} 
among the tests.

But 4 new tests have been added since and are not sharing that 
{{MiniOzoneCluster}}.

I was able to cut the run time of {{TestOzoneFileSystem}} from 3m18s to 1m2s 
on my Mac. It should save even more run time in the GitHub workflow.






[jira] [Comment Edited] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations

2020-08-12 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176129#comment-17176129
 ] 

Hemanth Boyina edited comment on HDFS-15510 at 8/12/20, 8:04 AM:
-

{quote}The issue here seems to be that the general one is not accounted 
correctly, we would need to fix that.
{quote}
For example: if we set a namespace quota of 10 on a mount entry with 2 
destinations, we set a quota of 10 on each destination, yet in the mount 
table store we record a namespace quota of 10 for the mount entry, which is 
not correct.

The periodic invoke gets the quota usage from the destination name services 
and the quota set value from the mount table store.


was (Author: hemanthboyina):
{quote}The issue here seems to be that the general one is not accounted 
correctly, we would need to fix that.
{quote}
For example: if we set a namespace quota of 10 on a mount entry with 2 
destinations, we set a quota of 10 on each destination, yet in the mount 
table store we record a namespace quota of 10 for the mount entry, which is 
not correct.

> RBF: Quota and Content Summary was not correct in Multiple Destinations
> ---
>
> Key: HDFS-15510
> URL: https://issues.apache.org/jira/browse/HDFS-15510
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Critical
>
> Steps:
> *) Create a mount entry with multiple destinations (for example, 2)
> *) Set the NS quota to 10 for the mount entry via the dfsrouteradmin 
> command; Content Summary on the mount entry shows the NS quota as 20
> *) Create 10 files through the router; on creating the 11th file, an NS 
> Quota Exceeded Exception is thrown
> Though the Content Summary shows the NS quota as 20, we are not able to 
> create 20 files
>  
> The problem here is that the router stores the mount entry's NS quota as 
> 10 but sets an NS quota of 10 on both name services, so the content 
> summary on the mount entry aggregates the content summaries of both name 
> services, reporting the NS quota as 20
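A toy model of the accounting described above (the names are illustrative, 
not the actual RBF classes): applying the mount entry's quota to every 
destination and then summing per-destination summaries doubles the reported 
quota.

```python
# Toy model of the router behaviour; "ns0"/"ns1" are hypothetical name services.
MOUNT_QUOTA = 10
destinations = ["ns0", "ns1"]

# The router applies the mount entry's quota to every destination...
per_ns_quota = {ns: MOUNT_QUOTA for ns in destinations}

# ...but the content summary aggregates by summing across destinations, so
# the reported quota becomes len(destinations) * MOUNT_QUOTA, even though
# only MOUNT_QUOTA files can actually be created through the router.
reported_quota = sum(per_ns_quota.values())
print(reported_quota)  # prints "20"
```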






[jira] [Commented] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations

2020-08-12 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17176129#comment-17176129
 ] 

Hemanth Boyina commented on HDFS-15510:
---

{quote}The issue here seems to be that the general one is not accounted 
correctly, we would need to fix that.
{quote}
For example: if we set a namespace quota of 10 on a mount entry with 2 
destinations, we set a quota of 10 on each destination, yet in the mount 
table store we record a namespace quota of 10 for the mount entry, which is 
not correct.

> RBF: Quota and Content Summary was not correct in Multiple Destinations
> ---
>
> Key: HDFS-15510
> URL: https://issues.apache.org/jira/browse/HDFS-15510
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Critical
>
> Steps:
> *) Create a mount entry with multiple destinations (for example, 2)
> *) Set the NS quota to 10 for the mount entry via the dfsrouteradmin 
> command; Content Summary on the mount entry shows the NS quota as 20
> *) Create 10 files through the router; on creating the 11th file, an NS 
> Quota Exceeded Exception is thrown
> Though the Content Summary shows the NS quota as 20, we are not able to 
> create 20 files
>  
> The problem here is that the router stores the mount entry's NS quota as 
> 10 but sets an NS quota of 10 on both name services, so the content 
> summary on the mount entry aggregates the content summaries of both name 
> services, reporting the NS quota as 20






[jira] [Updated] (HDFS-15525) Make trash root inside each snapshottable directory for WebHDFS

2020-08-12 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15525:
--
Status: Patch Available  (was: Open)

> Make trash root inside each snapshottable directory for WebHDFS
> ---
>
> Key: HDFS-15525
> URL: https://issues.apache.org/jira/browse/HDFS-15525
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: webhdfs
>Affects Versions: 3.3.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Same objective as HDFS-15492, but for WebHDFS, which has a different call 
> path for {{getTrashRoot}}.






[jira] [Created] (HDFS-15525) Make trash root inside each snapshottable directory for WebHDFS

2020-08-12 Thread Siyao Meng (Jira)
Siyao Meng created HDFS-15525:
-

 Summary: Make trash root inside each snapshottable directory for 
WebHDFS
 Key: HDFS-15525
 URL: https://issues.apache.org/jira/browse/HDFS-15525
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 3.3.0
Reporter: Siyao Meng
Assignee: Siyao Meng


Same objective as HDFS-15492, but for WebHDFS, which has a different call 
path for {{getTrashRoot}}.


