[jira] [Resolved] (HDFS-14813) RBF: Make Global quota and Remote quota consistent.

2020-09-24 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun resolved HDFS-14813.

Resolution: Fixed

Resolve this issue.

> RBF: Make Global quota and Remote quota consistent.
> ---
>
> Key: HDFS-14813
> URL: https://issues.apache.org/jira/browse/HDFS-14813
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
>
> Make Global quota and Remote quota consistent.
>  (Global quota: the quota on the mount table; Remote quota: the quota on the 
> namespace)
> HDFS administrators can use the global quota to simplify the management of 
> federation paths. But there is no consistency constraint between the global 
> quota and the remote quota. For an HDFS administrator, the inconsistency has 3 
> disadvantages for management:
>      1. The quota part of getQuotaUsage() on a federation path is not 
> helpful. It's neither the global quota nor one of the remote quotas.
>      2. The global quota could differ from the remote quota. When a 
> QuotaExceedException happens, the administrator has to find out whether 
> it's a violation of the global quota or the remote quota.
>      3. For management simplicity, it's always a good idea to keep the global 
> quota and the remote quota the same. Currently the administrator has to keep 
> them consistent manually.
>  My proposal is to add a constraint for the global quota: 
>      1. For federation paths, the global quota can be inherited from the 
> parent federation path.
>      2. For all remote paths in mount tables, the remote quotas must be 
> consistent with the global quotas.
>  To implement this, my idea is (a rough sketch follows below):
>      1. Global quota can be inherited. Add a method getGlobalQuota(String 
> path) to Quota.java returning the global quota.
>      2. Each time RouterQuotaUpdateService updates the quota usage for mount 
> table entries, it also checks and updates the remote quota.
>      3. When getQuotaUsage() is called on a federation path, return the global 
> quota.
>      4. When setQuota() is called on a federation path, first update the 
> global quota in the mount table, then recompute the global quota for the 
> current path and its children paths, and finally update all the federation 
> paths.
>  
> Implement 1+2 in HDFS-14814
> Implement 4 in HDFS-14815
> Implement 3 in HDFS-14955
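>
>  A rough sketch of the inheritance lookup in idea 1 (illustrative only; 
> getQuotaForMountEntry is a hypothetical helper, not existing code):
> {code:java}
> // Walk up the mount table so a child federation path inherits the quota
> // of its nearest ancestor that has a quota set.
> QuotaUsage getGlobalQuota(String path) throws IOException {
>   String p = path;
>   while (p != null) {
>     QuotaUsage quota = getQuotaForMountEntry(p); // null if no quota here
>     if (quota != null) {
>       return quota; // inherited global quota
>     }
>     p = "/".equals(p) ? null
>         : p.substring(0, Math.max(p.lastIndexOf('/'), 1));
>   }
>   return null; // no global quota on the ancestor chain
> }
> {code}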



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15599) RBF: Add API to expose resolved destinations (namespace) in Router

2020-09-24 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201870#comment-17201870
 ] 

Ayush Saxena commented on HDFS-15599:
-

Is the expectation something similar to what HDFS-14249 added?

> RBF: Add API to expose resolved destinations (namespace) in Router
> --
>
> Key: HDFS-15599
> URL: https://issues.apache.org/jira/browse/HDFS-15599
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>
> We have often seen requests asking where a path in the Router actually 
> points. Two main use cases are:
> 1) Calculate the HDFS capacity usage allocation of all Hive tables that 
> have been onboarded to the Router.
> 2) A failure-prevention method for cross-cluster rename: first check the 
> source HDFS location and the dest HDFS location, and then issue a distcp cmd if 
> possible to avoid the Exception.
> Inside the Router, the function getLocationsForPath does the work, but it is 
> internal only and not visible to clients.
> RouterAdmin has getMountTableEntries, but this is a dump of the mount table 
> without any resolving.
>  
> We are proposing adding such an API, and there are two ways:
> 1) Adding this API in RouterRpcServer, which requires a change in 
> ClientNameNodeProtocol to include this new API. 
> 2) Adding this API in RouterAdminServer, which requires a protocol 
> between the Client and the admin server.
>  
> There is an existing resolvePath in FileSystem which can be used to 
> implement this call from the client side.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15599) RBF: Add API to expose resolved destinations (namespace) in Router

2020-09-24 Thread Fengnan Li (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201825#comment-17201825
 ] 

Fengnan Li commented on HDFS-15599:
---

[~inigoiri] [~ayushtkn] What are your thoughts on this one?

> RBF: Add API to expose resolved destinations (namespace) in Router
> --
>
> Key: HDFS-15599
> URL: https://issues.apache.org/jira/browse/HDFS-15599
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>
> We have often seen requests asking where a path in the Router actually 
> points. Two main use cases are:
> 1) Calculate the HDFS capacity usage allocation of all Hive tables that 
> have been onboarded to the Router.
> 2) A failure-prevention method for cross-cluster rename: first check the 
> source HDFS location and the dest HDFS location, and then issue a distcp cmd if 
> possible to avoid the Exception.
> Inside the Router, the function getLocationsForPath does the work, but it is 
> internal only and not visible to clients.
> RouterAdmin has getMountTableEntries, but this is a dump of the mount table 
> without any resolving.
>  
> We are proposing adding such an API, and there are two ways:
> 1) Adding this API in RouterRpcServer, which requires a change in 
> ClientNameNodeProtocol to include this new API. 
> 2) Adding this API in RouterAdminServer, which requires a protocol 
> between the Client and the admin server.
>  
> There is an existing resolvePath in FileSystem which can be used to 
> implement this call from the client side.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15599) RBF: Add API to expose resolved destinations (namespace) in Router

2020-09-24 Thread Fengnan Li (Jira)
Fengnan Li created HDFS-15599:
-

 Summary: RBF: Add API to expose resolved destinations (namespace) 
in Router
 Key: HDFS-15599
 URL: https://issues.apache.org/jira/browse/HDFS-15599
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Fengnan Li
Assignee: Fengnan Li


We have often seen requests asking where a path in the Router actually 
points. Two main use cases are:

1) Calculate the HDFS capacity usage allocation of all Hive tables that have 
been onboarded to the Router.

2) A failure-prevention method for cross-cluster rename: first check the source 
HDFS location and the dest HDFS location, and then issue a distcp cmd if possible 
to avoid the Exception.

Inside the Router, the function getLocationsForPath does the work, but it is 
internal only and not visible to clients.

RouterAdmin has getMountTableEntries, but this is a dump of the mount table 
without any resolving.

 

We are proposing adding such an API, and there are two ways:

1) Adding this API in RouterRpcServer, which requires a change in 
ClientNameNodeProtocol to include this new API. 

2) Adding this API in RouterAdminServer, which requires a protocol between 
the Client and the admin server.

 

There is an existing resolvePath in FileSystem which can be used to implement 
this call from the client side.
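
For illustration, a minimal client-side sketch built on that existing 
FileSystem#resolvePath hook (the Router address and the path below are 
placeholders):

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ResolveDestination {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder RBF endpoint; use the real Router address.
    FileSystem fs = FileSystem.get(URI.create("hdfs://router:8888"), conf);
    // FileSystem#resolvePath is the existing client-side entry point the
    // description mentions; the proposed API would surface the resolved
    // destination (namespace + remote path) through a call like this.
    Path resolved = fs.resolvePath(new Path("/user/hive/warehouse/tbl"));
    System.out.println("Resolved destination: " + resolved);
  }
}
{code}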



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15594?focusedWorklogId=490460&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490460
 ]

ASF GitHub Bot logged work on HDFS-15594:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 23:19
Start Date: 24/Sep/20 23:19
Worklog Time Spent: 10m 
  Work Description: goiri commented on pull request #2332:
URL: https://github.com/apache/hadoop/pull/2332#issuecomment-698636026


   Thanks @NickyYe for the StringBuilder, that will reduce some memory too.
   I'm a little surprised that Yetus is not kicking in.
   @ayushtkn are you familiar with how to kick it? And now that I have your 
attention... do you see any issues with changing the message?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490460)
Time Spent: 40m  (was: 0.5h)

> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The number of live datanodes is not calculated since reported blocks hasn't 
> reached the threshold. Safe mode will be turned off automatically once the 
> thresholds have been reached.{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15548?focusedWorklogId=490441&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490441
 ]

ASF GitHub Bot logged work on HDFS-15548:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 22:25
Start Date: 24/Sep/20 22:25
Worklog Time Spent: 10m 
  Work Description: LeonGao91 commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-698620014


   @Hexiaoqiao Thanks for the comments! I have replied and please let me know 
if it makes sense to you.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490441)
Time Spent: 5h  (was: 4h 50m)

> Allow configuring DISK/ARCHIVE storage types on same device mount
> -
>
> Key: HDFS-15548
> URL: https://issues.apache.org/jira/browse/HDFS-15548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> We can allow configuring DISK/ARCHIVE storage types on the same device mount 
> using two separate directories.
> Users should be able to configure the capacity for each. Also, the datanode 
> usage report should report stats correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Closed] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-24 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu closed HDFS-15595.


> TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
> 
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.4.0
>
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-24 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201799#comment-17201799
 ] 

Mingliang Liu commented on HDFS-15595:
--

Because there is no code change for this JIRA, I am closing it with an empty 
"Fix Version/s" so the release manager does not need to look at this one. Thanks 
[~shashikant] for taking care of it.

> TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
> 
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Assignee: Shashikant Banerjee
>Priority: Major
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-24 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15595:
-
Fix Version/s: (was: 3.4.0)

> TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
> 
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Assignee: Shashikant Banerjee
>Priority: Major
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-24 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15025:
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM 
> not only improves the response rate of HDFS but also ensures the reliability 
> of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15548?focusedWorklogId=490434&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490434
 ]

ASF GitHub Bot logged work on HDFS-15548:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 21:57
Start Date: 24/Sep/20 21:57
Worklog Time Spent: 10m 
  Work Description: LeonGao91 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r494633283



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
##
@@ -62,9 +64,14 @@
   private final VolumeChoosingPolicy<FsVolumeImpl> blockChooser;
   private final BlockScanner blockScanner;
 
+  private boolean enableSameDiskTiering;

Review comment:
   Good catch! I will update them to use the same name.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490434)
Time Spent: 4h 50m  (was: 4h 40m)

> Allow configuring DISK/ARCHIVE storage types on same device mount
> -
>
> Key: HDFS-15548
> URL: https://issues.apache.org/jira/browse/HDFS-15548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> We can allow configuring DISK/ARCHIVE storage types on the same device mount 
> using two separate directories.
> Users should be able to configure the capacity for each. Also, the datanode 
> usage report should report stats correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15548?focusedWorklogId=490433&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490433
 ]

ASF GitHub Bot logged work on HDFS-15548:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 21:56
Start Date: 24/Sep/20 21:56
Worklog Time Spent: 10m 
  Work Description: LeonGao91 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r494633214



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
##
@@ -452,7 +487,33 @@ public long getAvailable() throws IOException {
   }
 
   long getActualNonDfsUsed() throws IOException {
-return usage.getUsed() - getDfsUsed();
+// DISK and ARCHIVAL on same disk

Review comment:
   Commented with an example use case above; hopefully it explains it well 
: )





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490433)
Time Spent: 4h 40m  (was: 4.5h)

> Allow configuring DISK/ARCHIVE storage types on same device mount
> -
>
> Key: HDFS-15548
> URL: https://issues.apache.org/jira/browse/HDFS-15548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> We can allow configuring DISK/ARCHIVE storage types on the same device mount 
> using two separate directories.
> Users should be able to configure the capacity for each. Also, the datanode 
> usage report should report stats correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15548?focusedWorklogId=490431&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490431
 ]

ASF GitHub Bot logged work on HDFS-15548:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 21:55
Start Date: 24/Sep/20 21:55
Worklog Time Spent: 10m 
  Work Description: LeonGao91 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r494632863



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
##
@@ -412,16 +435,28 @@ long getBlockPoolUsed(String bpid) throws IOException {
*/
   @VisibleForTesting
   public long getCapacity() {
+long capacity;
 if (configuredCapacity < 0L) {
   long remaining;
   if (cachedCapacity > 0L) {
 remaining = cachedCapacity - getReserved();
   } else {
 remaining = usage.getCapacity() - getReserved();
   }
-  return Math.max(remaining, 0L);
+  capacity = Math.max(remaining, 0L);
+} else {
+  capacity = configuredCapacity;
+}
+
+if (enableSameDiskArchival) {

Review comment:
   This is actually the important part of enabling this feature: allowing 
users to configure the capacity of an fsVolume.
   
   As we are configuring two fsVolumes on the same underlying filesystem, if we 
do nothing the capacity will be counted twice, and thus all the stats being 
reported will be incorrect.
   
   Here is an example:
   Let's say we want to configure `[DISK]/data01/dfs` and 
`[ARCHIVE]/data01/dfs_archive` on a 4TB disk mount `/data01`, and we want to 
assign 1TB to `[DISK]/data01/dfs` and 3TB to `[ARCHIVE]/data01/dfs_archive`; 
we can set `reservedForArchive` to 0.75 and put those two dirs in the 
volume list.
   
   In this case, `/data01/dfs` will be reported as a 1TB volume and 
`/data01/dfs_archive` will be reported as a 3TB volume to HDFS. Logically, HDFS 
will just treat them as two separate volumes.
   
   If we don't make the change here, HDFS will see two volumes of 4TB each; in 
that case, the 4TB disk will be counted as 4 * 2 = 8TB of capacity in the 
namenode and all the related stats will be wrong.
   
   Another change we need to make is to `getActualNonDfsUsed()`, as below. 
Let's say in the above 4TB disk setup we use 0.1TB as reserved, and 
`[ARCHIVE]/data01/dfs_archive` already has 2TB used; in this case, when we 
calculate `getActualNonDfsUsed()` for `[DISK]/data01/dfs`, it will always 
return 0, which is not correct and will cause other weird issues. As the two 
fsVolumes are on the same filesystem, the reserved space should be shared.
   
   According to our analysis and cluster testing results, updating these two 
functions, `getCapacity()` and `getActualNonDfsUsed()`, is enough to keep stats 
correct for the two "logical" fsVolumes on the same disk.
   
   I can update the javadoc to reflect this behavior when the feature is turned 
on.
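   
   To make the arithmetic concrete, a small illustrative snippet (the numbers 
come from the example above; this is not the actual patch code):
   
{code:java}
// One 4TB mount split between DISK and ARCHIVE volumes with
// reservedForArchive = 0.75, as in the example above.
long mountCapacity = 4L * 1024 * 1024 * 1024 * 1024;                // 4TB
double reservedForArchive = 0.75;

long archiveCapacity = (long) (mountCapacity * reservedForArchive); // 3TB
long diskCapacity = mountCapacity - archiveCapacity;                // 1TB
// Reported to the namenode as two logical volumes: 1TB + 3TB = 4TB,
// instead of double-counting the mount as 4TB + 4TB = 8TB.
{code}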





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490431)
Time Spent: 4.5h  (was: 4h 20m)

> Allow configuring DISK/ARCHIVE storage types on same device mount
> -
>
> Key: HDFS-15548
> URL: https://issues.apache.org/jira/browse/HDFS-15548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> We can allow configuring DISK/ARCHIVE storage types on the same device mount 
> using two separate directories.
> Users should be able to configure the capacity for each. Also, the datanode 
> usage report should report stats correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15594?focusedWorklogId=490430&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490430
 ]

ASF GitHub Bot logged work on HDFS-15594:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 21:36
Start Date: 24/Sep/20 21:36
Worklog Time Spent: 10m 
  Work Description: NickyYe commented on a change in pull request #2332:
URL: https://github.com/apache/hadoop/pull/2332#discussion_r494624826



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
##
@@ -309,16 +311,21 @@ String getSafeModeTip() {
 }
 
 if (datanodeThreshold > 0) {
-  int numLive = blockManager.getDatanodeManager().getNumLiveDataNodes();
-  if (numLive < datanodeThreshold) {
-msg += String.format(
-"The number of live datanodes %d needs an additional %d live "
-+ "datanodes to reach the minimum number %d.%n",
-numLive, (datanodeThreshold - numLive), datanodeThreshold);
+  if (isBlockThresholdMet) {
+int numLive = blockManager.getDatanodeManager().getNumLiveDataNodes();
+if (numLive < datanodeThreshold) {
+  msg += String.format(
+  "The number of live datanodes %d needs an additional %d live "
+  + "datanodes to reach the minimum number %d.%n",
+  numLive, (datanodeThreshold - numLive), datanodeThreshold);
+} else {
+  msg += String.format("The number of live datanodes %d has reached "
+  + "the minimum number %d. ",
+  numLive, datanodeThreshold);
+}
   } else {
-msg += String.format("The number of live datanodes %d has reached "
-+ "the minimum number %d. ",
-numLive, datanodeThreshold);
+msg += "The number of live datanodes is not calculated " +

Review comment:
   fixed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490430)
Time Spent: 0.5h  (was: 20m)

> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The number of live datanodes is not calculated since reported blocks hasn't 
> reached the threshold. Safe mode will be turned off automatically once the 
> thresholds have been reached.{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15548?focusedWorklogId=490418&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490418
 ]

ASF GitHub Bot logged work on HDFS-15548:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 20:44
Start Date: 24/Sep/20 20:44
Worklog Time Spent: 10m 
  Work Description: LeonGao91 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r494599382



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
##
@@ -190,6 +193,26 @@
 }
 this.conf = conf;
 this.fileIoProvider = fileIoProvider;
+this.enableSameDiskArchival =
+conf.getBoolean(DFSConfigKeys.DFS_DATANODE_ALLOW_SAME_DISK_TIERING,
+DFSConfigKeys.DFS_DATANODE_ALLOW_SAME_DISK_TIERING_DEFAULT);
+if (enableSameDiskArchival) {
+  this.mount = usage.getMount();
+  reservedForArchive = conf.getDouble(

Review comment:
   Yeah, it's a good point. The reason I put it this way is to make the 
configuration less verbose for the normal use case where the datanode only has 
one type of disk. Otherwise, users would need to tag all the disks, which is 
less readable and easy to get wrong.
   
   I think we can introduce an additional config later for the use case you 
mentioned, to list out each volume and its target ratio.
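   
   A minimal sketch of how the toggle might look in a test configuration (the 
tiering key comes from the diff above; the percentage key name is a stand-in, 
since the actual key is truncated in the diff):
   
{code:java}
Configuration conf = new Configuration();
// Enable DISK/ARCHIVE volumes sharing one device mount.
conf.setBoolean(DFSConfigKeys.DFS_DATANODE_ALLOW_SAME_DISK_TIERING, true);
// Stand-in key name: fraction of each shared mount reserved for ARCHIVE.
conf.setDouble("dfs.datanode.reserve-for-archive.percentage", 0.75);
{code}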





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490418)
Time Spent: 4h 20m  (was: 4h 10m)

> Allow configuring DISK/ARCHIVE storage types on same device mount
> -
>
> Key: HDFS-15548
> URL: https://issues.apache.org/jira/browse/HDFS-15548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> We can allow configuring DISK/ARCHIVE storage types on the same device mount 
> using two separate directories.
> Users should be able to configure the capacity for each. Also, the datanode 
> usage report should report stats correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15598) ViewHDFS#canonicalizeUri should not be restricted to DFS only API.

2020-09-24 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201757#comment-17201757
 ] 

Uma Maheswara Rao G commented on HDFS-15598:


Hive Insert to a partition is failing. 

 
{code:java}
INFO  : TaskCounter_Reducer_2_OUTPUT_out_Reducer_2:
INFO  :    OUTPUT_RECORDS: 0
ERROR : Job Commit failed with exception 
'java.lang.UnsupportedOperationException(This API:canonicalizeUri is specific 
to DFS. Can't run on other fs:o3fs://bucket.vol.ozone1)'
java.lang.UnsupportedOperationException: This API:canonicalizeUri is specific 
to DFS. Can't run on other fs:o3fs://bucket.vol.ozone1
 at 
org.apache.hadoop.hdfs.ViewDistributedFileSystem.checkDFS(ViewDistributedFileSystem.java:402)
 at 
org.apache.hadoop.hdfs.ViewDistributedFileSystem.canonicalizeUri(ViewDistributedFileSystem.java:1086)
 at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:761)
 at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:629)
 at 
org.apache.hadoop.hive.ql.exec.Utilities.handleDirectInsertTableFinalPath(Utilities.java:4602)
 at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1470)
 at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:798)
 at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
 at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
 at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
 at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
 at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:627)
 at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:342)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213)
 at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105)
 at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:357)
 at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:330)
 at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:246)
 at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:109)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:730)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:490)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:484)
 at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166)
 at 
org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:225)
 at 
org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
 at 
org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:322)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
 at 
org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:340)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
ERROR : FAILED: Execution Error, return code 3 from 
org.apache.hadoop.hive.ql.exec.tez.TezTask
INFO  : Completed executing 
command(queryId=hive_20200924190718_d1092cd8-6b63-4374-ba1b-1f6df8212f30); Time 
taken: 14.705 seconds
INFO  : OK
{code}
 

> ViewHDFS#canonicalizeUri should not be restricted to DFS only API.
> --
>
> Key: HDFS-15598
> URL: https://issues.apache.org/jira/browse/HDFS-15598
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>
> As part of Hive partitions verification, an insert failed because 
> canonicalizeUri is restricted to DFS only. This can be relaxed to delegate to 
> vfs#canonicalizeUri.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15598) ViewHDFS#canonicalizeUri should not be restricted to DFS only API.

2020-09-24 Thread Uma Maheswara Rao G (Jira)
Uma Maheswara Rao G created HDFS-15598:
--

 Summary: ViewHDFS#canonicalizeUri should not be restricted to DFS 
only API.
 Key: HDFS-15598
 URL: https://issues.apache.org/jira/browse/HDFS-15598
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.4.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


As part of Hive partitions verification, an insert failed because canonicalizeUri 
is restricted to DFS only. This can be relaxed to delegate to vfs#canonicalizeUri.
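
A rough sketch of the relaxation (illustrative only; it assumes the internal 
ViewFileSystem is kept in a field called vfs and that its canonicalizeUri is 
accessible from here):

{code:java}
@Override
protected URI canonicalizeUri(URI uri) {
  if (this.vfs == null) {
    // Initialization path: behave like plain DistributedFileSystem.
    return super.canonicalizeUri(uri);
  }
  // Delegate to the mounted ViewFileSystem instead of a DFS-only check,
  // so non-DFS targets such as o3fs:// are handled.
  return this.vfs.canonicalizeUri(uri);
}
{code}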



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-24 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201718#comment-17201718
 ] 

Ayush Saxena commented on HDFS-15591:
-

It is showing for me as well.
[~elgoiri] I tried creating just one mount entry like:

/test/123 --> ns1 /testdir 

and the slash was there.

 !RBF_Browse_Directory.png! 


[~wangzhaohui] the test failures are related; you can't change the argument for 
getMountStatus. Instead, try just changing the {{uPath}} of the received 
{{HdfsLocatedFileStatus}} to just {{child}}.

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, HDFS-15591-002.patch, 
> RBF_Browse_Directory.png, after-1.jpg, after-2.jpg, before-1.jpg, before-2.jpg
>
>
> When the path mounted by the Router does not exist on the NN, the Router will 
> create a virtual folder with the mount name, but the "Browse the file system" 
> display over HTTP is wrong. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-24 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15591:

Attachment: RBF_Browse_Directory.png

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, HDFS-15591-002.patch, 
> RBF_Browse_Directory.png, after-1.jpg, after-2.jpg, before-1.jpg, before-2.jpg
>
>
> When the path mounted by the Router does not exist on the NN, the Router will 
> create a virtual folder with the mount name, but the "Browse the file system" 
> display over HTTP is wrong. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15594?focusedWorklogId=490313&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490313
 ]

ASF GitHub Bot logged work on HDFS-15594:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 16:42
Start Date: 24/Sep/20 16:42
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #2332:
URL: https://github.com/apache/hadoop/pull/2332#discussion_r494462104



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
##
@@ -309,16 +311,21 @@ String getSafeModeTip() {
 }
 
 if (datanodeThreshold > 0) {
-  int numLive = blockManager.getDatanodeManager().getNumLiveDataNodes();
-  if (numLive < datanodeThreshold) {
-msg += String.format(
-"The number of live datanodes %d needs an additional %d live "
-+ "datanodes to reach the minimum number %d.%n",
-numLive, (datanodeThreshold - numLive), datanodeThreshold);
+  if (isBlockThresholdMet) {
+int numLive = blockManager.getDatanodeManager().getNumLiveDataNodes();
+if (numLive < datanodeThreshold) {
+  msg += String.format(
+  "The number of live datanodes %d needs an additional %d live "
+  + "datanodes to reach the minimum number %d.%n",
+  numLive, (datanodeThreshold - numLive), datanodeThreshold);
+} else {
+  msg += String.format("The number of live datanodes %d has reached "
+  + "the minimum number %d. ",
+  numLive, datanodeThreshold);
+}
   } else {
-msg += String.format("The number of live datanodes %d has reached "
-+ "the minimum number %d. ",
-numLive, datanodeThreshold);
+msg += "The number of live datanodes is not calculated " +

Review comment:
   As we are at it, does it make sense to use StringBuilder?
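   
   For reference, a minimal sketch of the StringBuilder variant under 
discussion (illustrative only, not the committed patch):
   
{code:java}
// Build the safe mode tip with a StringBuilder instead of repeated
// string concatenation; the message fragments mirror the diff above.
StringBuilder msg = new StringBuilder();
if (isBlockThresholdMet) {
  int numLive = blockManager.getDatanodeManager().getNumLiveDataNodes();
  if (numLive < datanodeThreshold) {
    msg.append(String.format(
        "The number of live datanodes %d needs an additional %d live "
            + "datanodes to reach the minimum number %d.%n",
        numLive, (datanodeThreshold - numLive), datanodeThreshold));
  } else {
    msg.append(String.format(
        "The number of live datanodes %d has reached the minimum "
            + "number %d. ", numLive, datanodeThreshold));
  }
} else {
  msg.append("The number of live datanodes is not calculated since "
      + "reported blocks hasn't reached the threshold. ");
}
{code}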





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490313)
Time Spent: 20m  (was: 10m)

> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The number of live datanodes is not calculated since reported blocks hasn't 
> reached the threshold. Safe mode will be turned off automatically once the 
> thresholds have been reached.{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-24 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201647#comment-17201647
 ] 

Íñigo Goiri commented on HDFS-15594:


Thanks [~hexiaoqiao] for the feedback.
I agree this should be fine as this is just for UI purposes.

> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The number of live datanodes is not calculated since reported blocks hasn't 
> reached the threshold. Safe mode will be turned off automatically once the 
> thresholds have been reached.{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-24 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201642#comment-17201642
 ] 

Íñigo Goiri commented on HDFS-15591:


I think this actually broke the tests.
There is also something weird here.
I have the same kind of mount points but they don't show with the slash at the 
beginning.
[~ayushtkn] do you see this?

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, HDFS-15591-002.patch, after-1.jpg, 
> after-2.jpg, before-1.jpg, before-2.jpg
>
>
> When the path mounted by the Router does not exist on the NN, the Router will 
> create a virtual folder with the mount name, but the "Browse the file system" 
> display over HTTP is wrong. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15596) ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only.

2020-09-24 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-15596.

   Fix Version/s: 3.4.0
Hadoop Flags: Reviewed
Target Version/s: 3.3.1
  Resolution: Fixed

Thanks [~ayushtkn] for the review! Committed.

> ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, 
> progress, checksumOpt) should not be restricted to DFS only.
> ---
>
> Key: HDFS-15596
> URL: https://issues.apache.org/jira/browse/HDFS-15596
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ViewHDFS#create(f, permission, cflags, bufferSize, replication, 
> blockSize, progress, checksumOpt) API is already available in FileSystem. It 
> delegates to another overloaded API and can ultimately reach ViewFileSystem; 
> this case also works in regular ViewFileSystem. With ViewHDFS, we restricted 
> it to DFS only, which causes distcp to fail when the target is non-HDFS, 
> since distcp uses this API.
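
As a rough sketch of the delegation pattern at issue (resolveMount() is a hypothetical helper, and this is not the committed patch): the overload should hand the call to whatever FileSystem actually backs the path instead of hard-casting to DFS.

{code:java}
import java.io.IOException;
import java.util.EnumSet;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Options.ChecksumOpt;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

// Hedged sketch, not the actual ViewHDFS code.
abstract class MountAwareFileSystem extends FileSystem {

  // Hypothetical helper: resolve a path to the FileSystem that serves it.
  protected abstract FileSystem resolveMount(Path f) throws IOException;

  @Override
  public FSDataOutputStream create(Path f, FsPermission permission,
      EnumSet<CreateFlag> cflags, int bufferSize, short replication,
      long blockSize, Progressable progress, ChecksumOpt checksumOpt)
      throws IOException {
    // The resolved target may be HDFS or any other FileSystem; distcp only
    // needs the generic FileSystem contract here.
    return resolveMount(f).create(f, permission, cflags, bufferSize,
        replication, blockSize, progress, checksumOpt);
  }
}
{code}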



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15596) ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only.

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15596?focusedWorklogId=490225=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490225
 ]

ASF GitHub Bot logged work on HDFS-15596:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 14:08
Start Date: 24/Sep/20 14:08
Worklog Time Spent: 10m 
  Work Description: umamaheswararao merged pull request #2333:
URL: https://github.com/apache/hadoop/pull/2333


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490225)
Time Spent: 0.5h  (was: 20m)

> ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, 
> progress, checksumOpt) should not be restricted to DFS only.
> ---
>
> Key: HDFS-15596
> URL: https://issues.apache.org/jira/browse/HDFS-15596
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ViewHDFS#create(f, permission, cflags, bufferSize, replication, 
> blockSize, progress, checksumOpt) API is already available in FileSystem. It 
> delegates to another overloaded API and can ultimately reach ViewFileSystem; 
> this case also works in regular ViewFileSystem. With ViewHDFS, we restricted 
> it to DFS only, which causes distcp to fail when the target is non-HDFS, 
> since distcp uses this API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15596) ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only.

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15596?focusedWorklogId=490223=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490223
 ]

ASF GitHub Bot logged work on HDFS-15596:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 14:06
Start Date: 24/Sep/20 14:06
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on pull request #2333:
URL: https://github.com/apache/hadoop/pull/2333#issuecomment-698367707


   Jenkins Run:
   
https://issues.apache.org/jira/browse/HDFS-15596?focusedCommentId=17201381=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17201381
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490223)
Time Spent: 20m  (was: 10m)

> ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, 
> progress, checksumOpt) should not be restricted to DFS only.
> ---
>
> Key: HDFS-15596
> URL: https://issues.apache.org/jira/browse/HDFS-15596
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The ViewHDFS#create(f, permission, cflags, bufferSize, replication, 
> blockSize, progress, checksumOpt) API is already available in FileSystem. It 
> delegates to another overloaded API and can ultimately reach ViewFileSystem; 
> this case also works in regular ViewFileSystem. With ViewHDFS, we restricted 
> it to DFS only, which causes distcp to fail when the target is non-HDFS, 
> since distcp uses this API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17201479#comment-17201479
 ] 

Hadoop QA commented on HDFS-15591:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
26s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 57s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
25s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/204/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt{color}
 | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 2 
new + 5 unchanged - 0 fixed = 7 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 13s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | 

[jira] [Work logged] (HDFS-15593) Hadoop - Upgrade to JQuery 3.5.1

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15593?focusedWorklogId=490150=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490150
 ]

ASF GitHub Bot logged work on HDFS-15593:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 11:39
Start Date: 24/Sep/20 11:39
Worklog Time Spent: 10m 
  Work Description: aryangupta1998 commented on pull request #2330:
URL: https://github.com/apache/hadoop/pull/2330#issuecomment-698289519


   
   Thanks, @tasanuma for the review. I have addressed the comments. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490150)
Time Spent: 0.5h  (was: 20m)

> Hadoop - Upgrade to JQuery 3.5.1
> 
>
> Key: HDFS-15593
> URL: https://issues.apache.org/jira/browse/HDFS-15593
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Aryan Gupta
>Assignee: Aryan Gupta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> jQuery version is being upgraded from jquery-3.4.1.min.js to 
> jquery-3.5.1.min.js



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17201459#comment-17201459
 ] 

Hadoop QA commented on HDFS-15591:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 0s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 59s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
11s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/203/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt{color}
 | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 2 
new + 5 unchanged - 0 fixed = 7 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 53s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | 

[jira] [Work logged] (HDFS-15593) Hadoop - Upgrade to JQuery 3.5.1

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15593?focusedWorklogId=490142=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490142
 ]

ASF GitHub Bot logged work on HDFS-15593:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 11:18
Start Date: 24/Sep/20 11:18
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2330:
URL: https://github.com/apache/hadoop/pull/2330#issuecomment-698280650


   Thanks for working on this, @aryangupta1998.
   
   There are still some places using jquery-3.4. Could you also fix them?
   ```
   $ find . -type f | grep -v target | xargs grep 'jquery-3.4' 2> /dev/null
   ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml:
src/main/resources/webapps/static/jquery/jquery-3.4.1.min.js
   
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java:
.script(root_url("static/jquery/jquery-3.4.1.min.js"))
   
./LICENSE-binary:hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.4.1.min.js
   

[jira] [Commented] (HDFS-15569) Speed up the Storage#doRecover during datanode rolling upgrade

2020-09-24 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17201450#comment-17201450
 ] 

Stephen O'Donnell commented on HDFS-15569:
--

If there are multiple restarts of the DN, you would get current.tmp after the 
first restart, and the second restart would then need to wait for it to be 
deleted.

Do you think it would be better to rename the folder to current. and then 
have the async delete thread simply delete all folders matching that pattern 
one by one?

This delete may have an impact on the disks while the upgrade step is 
attempting to create the hardlinks in the new directory, as the delete will be 
fighting for disk bandwidth too. I wonder if this delete would be better 
delayed until after the hard link creation has completed? One possible 
negative is that the overhead of the delete is postponed until the DN is 
actually in service, which might impact workloads.
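
A rough sketch of that rename-plus-async-delete idea, under the assumption that a unique suffix is used so repeated restarts never collide (the ".todelete" naming here is purely illustrative, not the actual patch):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Comparator;

// Illustrative sketch, not the actual patch.
class AsyncDirCleanup {

  // Move the stale directory aside so recovery can proceed immediately.
  static Path renameAside(Path dir) throws IOException {
    Path tomb = dir.resolveSibling(
        dir.getFileName() + ".todelete." + System.nanoTime());
    return Files.move(dir, tomb, StandardCopyOption.ATOMIC_MOVE);
  }

  // Reclaim the space on a background thread, one tree at a time.
  static void deleteLater(Path tomb) {
    Thread cleaner = new Thread(() -> {
      try {
        // Reverse (depth-first) order removes children before parents.
        Files.walk(tomb)
            .sorted(Comparator.reverseOrder())
            .forEach(p -> p.toFile().delete());
      } catch (IOException e) {
        // Best effort: leftovers can be retried on the next restart.
      }
    }, "async-dir-cleanup");
    cleaner.setDaemon(true);
    cleaner.start();
  }
}
{code}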

> Speed up the Storage#doRecover during datanode rolling upgrade 
> ---
>
> Key: HDFS-15569
> URL: https://issues.apache.org/jira/browse/HDFS-15569
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Major
> Attachments: HDFS-15569.001.patch, HDFS-15569.002.patch, 
> HDFS-15569.003.patch
>
>
> When upgrading a datanode from Hadoop 2.7.2 to 3.1.1, the upgrade failed 
> because the JVM did not have enough memory. After adjusting the memory 
> configuration and re-upgrading the datanode, the upgrade took much longer. 
> Analysis showed that Storage#deleteDir spends most of its time in the 
> RECOVER_UPGRADE state:
> {code:java}
> "Thread-28" #270 daemon prio=5 os_prio=0 tid=0x7fed5a9b8000 nid=0x2b5c 
> runnable [0x7fdcdad2a000]"Thread-28" #270 daemon prio=5 os_prio=0 
> tid=0x7fed5a9b8000 nid=0x2b5c runnable [0x7fdcdad2a000]   
> java.lang.Thread.State: RUNNABLE at java.io.UnixFileSystem.delete0(Native 
> Method) at java.io.UnixFileSystem.delete(UnixFileSystem.java:265) at 
> java.io.File.delete(File.java:1041) at 
> org.apache.hadoop.fs.FileUtil.deleteImpl(FileUtil.java:229) at 
> org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:270) at 
> org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182) at 
> org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:285) at 
> org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182) at 
> org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:285) at 
> org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182) at 
> org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:285) at 
> org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182) at 
> org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:153) at 
> org.apache.hadoop.hdfs.server.common.Storage.deleteDir(Storage.java:1348) at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.doRecover(Storage.java:782)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:174)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:224)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:253)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:455)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:389)
>  - locked <0x7fdf08ec7548> (a 
> org.apache.hadoop.hdfs.server.datanode.DataStorage) at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:557)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1761)
>  - locked <0x7fdf08ec7598> (a 
> org.apache.hadoop.hdfs.server.datanode.DataNode) at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1697)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:392)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:282)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
>  at java.lang.Thread.run(Thread.java:748) {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-24 Thread wangzhaohui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-15591:
---
Attachment: HDFS-15591-002.patch

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, HDFS-15591-002.patch, after-1.jpg, 
> after-2.jpg, before-1.jpg, before-2.jpg
>
>
> When the path mounted by the Router does not exist on the NN, the Router 
> creates a virtual folder with the mount name, but the "Browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-24 Thread wangzhaohui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-15591:
---
Attachment: (was: HDFS-15591-002.patch)

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, after-1.jpg, after-2.jpg, 
> before-1.jpg, before-2.jpg
>
>
> When the path mounted by the Router does not exist on the NN, the Router 
> creates a virtual folder with the mount name, but the "Browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-24 Thread wangzhaohui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-15591:
---
Attachment: HDFS-15591-002.patch

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, HDFS-15591-002.patch, after-1.jpg, 
> after-2.jpg, before-1.jpg, before-2.jpg
>
>
> When the path mounted by the Router does not exist on the NN, the Router 
> creates a virtual folder with the mount name, but the "Browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15569) Speed up the Storage#doRecover during datanode rolling upgrade

2020-09-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17201411#comment-17201411
 ] 

Hadoop QA commented on HDFS-15569:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
9s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
41s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 22s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
4s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; 
considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green}{color} | {color:green} 
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 64 unchanged - 3 
fixed = 64 total (was 67) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 57s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | 

[jira] [Commented] (HDFS-15596) ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only.

2020-09-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17201381#comment-17201381
 ] 

Hadoop QA commented on HDFS-15596:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
8s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 
17s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 53s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
28s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 42s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green}{color} | {color:green} the patch passed with JDK 

[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=490068=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490068
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 08:57
Start Date: 24/Sep/20 08:57
Worklog Time Spent: 10m 
  Work Description: brahmareddybattula merged pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490068)
Time Spent: 11.5h  (was: 11h 20m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> Non-volatile NVDIMM memory is faster than SSD and can be used alongside 
> RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only improves 
> the response rate of HDFS but also ensures the reliability of the data.
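
For context, a hedged sketch of how such a volume might be tagged, assuming the feature follows the storage-type convention already used for [SSD] and [RAM_DISK] volumes; the pmem mount path is purely illustrative.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class NvdimmVolumeExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Tag one datanode volume as NVDIMM alongside a regular disk volume.
    conf.set("dfs.datanode.data.dir",
        "[NVDIMM]/mnt/pmem0/hdfs/data,[DISK]/data1/hdfs/data");
    System.out.println(conf.get("dfs.datanode.data.dir"));
  }
}
{code}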



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=490066=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490066
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 08:55
Start Date: 24/Sep/20 08:55
Worklog Time Spent: 10m 
  Work Description: brahmareddybattula edited a comment on pull request 
#2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-698212333


   @huangtianhua and @YaYun-Wang  thanks for contribution. @liuml07  thanks for 
review.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490066)
Time Spent: 11h 20m  (was: 11h 10m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 11h 20m
>  Remaining Estimate: 0h
>
> Non-volatile NVDIMM memory is faster than SSD and can be used alongside 
> RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only improves 
> the response rate of HDFS but also ensures the reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=490065=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490065
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 08:55
Start Date: 24/Sep/20 08:55
Worklog Time Spent: 10m 
  Work Description: brahmareddybattula commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-698212333


   @huangtianhua and @YaYun-Wang  thanks for review. @liuml07  thanks for 
review.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490065)
Time Spent: 11h 10m  (was: 11h)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 11h 10m
>  Remaining Estimate: 0h
>
> Non-volatile NVDIMM memory is faster than SSD and can be used alongside 
> RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only improves 
> the response rate of HDFS but also ensures the reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-24 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDFS-15595.

Fix Version/s: 3.4.0
   Resolution: Fixed

> TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
> 
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.4.0
>
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled

2020-09-24 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDFS-15590.

Resolution: Fixed

> namenode fails to start when ordered snapshot deletion feature is disabled
> --
>
> Key: HDFS-15590
> URL: https://issues.apache.org/jira/browse/HDFS-15590
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> 1. Enabled ordered deletion snapshot feature.
> 2. Created snapshottable directory - /user/hrt_6/atrr_dir1
> 3. Created snapshots s0, s1, s2.
> 4. Deleted snapshot s2
> 5. Delete snapshot s0, s1, s2 again
> 6. Disable ordered deletion snapshot feature
> 7. Restart Namenode
> Failed to start namenode.
> org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot delete snapshot s2 
> from path /user/hrt_6/atrr_dir2: the snapshot does not exist.
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:237)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:293)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:510)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:819)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15590?focusedWorklogId=490053=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490053
 ]

ASF GitHub Bot logged work on HDFS-15590:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 08:31
Start Date: 24/Sep/20 08:31
Worklog Time Spent: 10m 
  Work Description: bshashikant merged pull request #2326:
URL: https://github.com/apache/hadoop/pull/2326


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490053)
Time Spent: 1h 10m  (was: 1h)

> namenode fails to start when ordered snapshot deletion feature is disabled
> --
>
> Key: HDFS-15590
> URL: https://issues.apache.org/jira/browse/HDFS-15590
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 1. Enabled ordered deletion snapshot feature.
> 2. Created snapshottable directory - /user/hrt_6/atrr_dir1
> 3. Created snapshots s0, s1, s2.
> 4. Deleted snapshot s2
> 5. Delete snapshot s0, s1, s2 again
> 6. Disable ordered deletion snapshot feature
> 7. Restart Namenode
> Failed to start namenode.
> org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot delete snapshot s2 
> from path /user/hrt_6/atrr_dir2: the snapshot does not exist.
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:237)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:293)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:510)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:819)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15590?focusedWorklogId=490055=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490055
 ]

ASF GitHub Bot logged work on HDFS-15590:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 08:31
Start Date: 24/Sep/20 08:31
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #2326:
URL: https://github.com/apache/hadoop/pull/2326#issuecomment-698199742


   Thanks @szetszwo for the review. I have committed this.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490055)
Time Spent: 1h 20m  (was: 1h 10m)

> namenode fails to start when ordered snapshot deletion feature is disabled
> --
>
> Key: HDFS-15590
> URL: https://issues.apache.org/jira/browse/HDFS-15590
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> 1. Enabled ordered deletion snapshot feature.
> 2. Created snapshottable directory - /user/hrt_6/atrr_dir1
> 3. Created snapshots s0, s1, s2.
> 4. Deleted snapshot s2
> 5. Delete snapshot s0, s1, s2 again
> 6. Disable ordered deletion snapshot feature
> 7. Restart Namenode
> Failed to start namenode.
> org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot delete snapshot s2 
> from path /user/hrt_6/atrr_dir2: the snapshot does not exist.
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:237)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:293)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:510)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:819)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-24 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201342#comment-17201342
 ] 

Stephen O'Donnell commented on HDFS-15415:
--

Thanks [~weichiu] - I ran both failing test classes locally (via IntelliJ) 
a few times and they passed quickly on every run, with no failures or 
timeouts. Also, the previous Yetus run for the 001 patch had different test 
failures, and the only changes in 002 are style corrections. I think these 
failures are nothing to worry about.

> Reduce locking in Datanode DirectoryScanner
> ---
>
> Key: HDFS-15415
> URL: https://issues.apache.org/jira/browse/HDFS-15415
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15415.001.patch, HDFS-15415.002.patch, 
> HDFS-15415.003.patch, HDFS-15415.004.patch, HDFS-15415.005.patch, 
> HDFS-15415.branch-3.2.001.patch, HDFS-15415.branch-3.2.002.patch, 
> HDFS-15415.branch-3.3.001.patch
>
>
> In HDFS-15406, we have a small change that greatly reduces the runtime and 
> locking time of the datanode DirectoryScanner. There may be room for further 
> improvement.
> From the scan step, we have captured a snapshot of what is on disk. After 
> calling `dataset.getFinalizedBlocks(bpid);` we have also taken a snapshot of 
> what is in memory. The two snapshots are never 100% in sync, as things keep 
> changing while the disk is scanned.
> We are only comparing finalized blocks, so they should not really change:
> * If a block is deleted after our snapshot, our snapshot will not see it and 
> that is OK.
> * A finalized block could be appended. If that happens, both the genstamp and 
> length will change, but that should be handled by reconcile when it calls 
> `FSDatasetImpl.checkAndUpdate()`, and there is nothing stopping blocks being 
> appended after they have been scanned from disk but before they have been 
> compared with memory.
> My suspicion is that we can do all the comparison work outside of the lock, 
> and let checkAndUpdate() re-check any differences later, under the lock, on a 
> block-by-block basis.
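> A minimal sketch of that "compare outside the lock, re-check under the lock" 
> pattern (types and names here are illustrative stand-ins, not the actual 
> DirectoryScanner code):
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
> 
> class ScanReconciler {
>   private final Object datasetLock = new Object();
>   private final Map<Long, String> memory = new ConcurrentHashMap<>();
> 
>   // No lock held: both inputs are point-in-time snapshots, so a stale
>   // candidate is acceptable -- it is re-verified under the lock below.
>   List<Long> findCandidateDiffs(Map<Long, String> diskSnapshot) {
>     List<Long> diffs = new ArrayList<>();
>     for (Map.Entry<Long, String> e : diskSnapshot.entrySet()) {
>       if (!e.getValue().equals(memory.get(e.getKey()))) {
>         diffs.add(e.getKey());
>       }
>     }
>     return diffs;
>   }
> 
>   // Short per-block critical sections instead of one long scan lock.
>   void reconcile(Map<Long, String> diskSnapshot) {
>     for (Long blockId : findCandidateDiffs(diskSnapshot)) {
>       synchronized (datasetLock) {
>         String onDisk = diskSnapshot.get(blockId);
>         // Re-check against the live in-memory state; it may have changed.
>         if (onDisk != null && !onDisk.equals(memory.get(blockId))) {
>           memory.put(blockId, onDisk); // e.g. fix length/genstamp drift
>         }
>       }
>     }
>   }
> }
> {code}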



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15596) ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only.

2020-09-24 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201340#comment-17201340
 ] 

Uma Maheswara Rao G commented on HDFS-15596:


After the fix, DistCp ran successfully:

{code:java}
[root@uma-1 /]# sudo -u hdfs hadoop distcp /test /OzoneTest
20/09/24 06:56:09 INFO tools.DistCp: Input Options: 
DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
useRdiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, 
blocking=true, numListstatusThreads=0, maxMaps=20, mapBandwidth=0.0, 
copyStrategy='uniformsize', preserveStatus=[], atomicWorkPath=null, 
logPath=null, sourceFileListing=null, sourcePaths=[/test], 
targetPath=/OzoneTest, filtersFile='null', blocksPerChunk=0, 
copyBufferSize=8192, verboseLog=false, directWrite=false}, sourcePaths=[/test], 
targetPathExists=true, preserveRawXattrsfalse
20/09/24 06:56:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 2; 
dirCnt = 1
20/09/24 06:56:10 INFO tools.SimpleCopyListing: Build file listing completed.
20/09/24 06:56:10 INFO Configuration.deprecation: io.sort.mb is deprecated. 
Instead, use mapreduce.task.io.sort.mb
20/09/24 06:56:10 INFO Configuration.deprecation: io.sort.factor is deprecated. 
Instead, use mapreduce.task.io.sort.factor
20/09/24 06:56:10 INFO tools.DistCp: Number of paths in the copy list: 2
20/09/24 06:56:10 INFO tools.DistCp: Number of paths in the copy list: 2
20/09/24 06:56:10 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm23
20/09/24 06:56:10 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1600930164279_0009
20/09/24 06:56:10 INFO mapreduce.JobSubmitter: number of splits:2
20/09/24 06:56:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1600930164279_0009
20/09/24 06:56:11 INFO mapreduce.JobSubmitter: Executing with tokens: []
20/09/24 06:56:11 INFO conf.Configuration: resource-types.xml not found
20/09/24 06:56:11 INFO resource.ResourceUtils: Unable to find 
'resource-types.xml'.
20/09/24 06:56:11 INFO impl.YarnClientImpl: Submitted application 
application_1600930164279_0009
20/09/24 06:56:11 INFO mapreduce.Job: The url to track the job: 
http://uma-2.uma.xxx.xxx.xxx:8088/proxy/application_1600930164279_0009/
20/09/24 06:56:11 INFO tools.DistCp: DistCp job-id: job_1600930164279_0009
20/09/24 06:56:11 INFO mapreduce.Job: Running job: job_1600930164279_0009
20/09/24 06:56:22 INFO mapreduce.Job: Job job_1600930164279_0009 running in 
uber mode : false
20/09/24 06:56:22 INFO mapreduce.Job:  map 0% reduce 0%
20/09/24 06:56:28 INFO mapreduce.Job:  map 50% reduce 0%
20/09/24 06:56:34 INFO mapreduce.Job:  map 100% reduce 0%
20/09/24 06:56:34 INFO mapreduce.Job: Job job_1600930164279_0009 completed 
successfully
20/09/24 06:56:35 INFO mapreduce.Job: Counters: 43
 File System Counters
 FILE: Number of bytes read=0
 FILE: Number of bytes written=583482
 FILE: Number of read operations=0
 FILE: Number of large read operations=0
 FILE: Number of write operations=0
 HDFS: Number of bytes read=865
 HDFS: Number of bytes written=0
 HDFS: Number of read operations=20
 HDFS: Number of large read operations=0
 HDFS: Number of write operations=4
 HDFS: Number of bytes read erasure-coded=0
 O3FS: Number of bytes read=0
 O3FS: Number of bytes written=17
 O3FS: Number of read operations=12
 O3FS: Number of large read operations=0
 O3FS: Number of write operations=3
 Job Counters 
 Launched map tasks=2
 Other local map tasks=2
 Total time spent by all maps in occupied slots (ms)=13070
 Total time spent by all reduces in occupied slots (ms)=0
 Total time spent by all map tasks (ms)=13070
 Total vcore-milliseconds taken by all map tasks=13070
 Total megabyte-milliseconds taken by all map tasks=13383680
 Map-Reduce Framework
 Map input records=2
 Map output records=0
 Input split bytes=228
 Spilled Records=0
 Failed Shuffles=0
 Merged Map outputs=0
 GC time elapsed (ms)=285
 CPU time spent (ms)=3030
 Physical memory (bytes) snapshot=804298752
 Virtual memory (bytes) snapshot=5346963456
 Total committed heap usage (bytes)=729284608
 Peak Map Physical memory (bytes)=428199936
 Peak Map Virtual memory (bytes)=2674827264
 File Input Format Counters 
 Bytes Read=620
 File Output Format Counters 
 Bytes Written=0
 DistCp Counters
 Bandwidth in Btyes=17
 Bytes Copied=17
 Bytes Expected=17
 Files Copied=1
 DIR_COPY=1
{code}

> ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, 
> progress, checksumOpt) should not be restricted to DFS only.
> ---
>
> Key: HDFS-15596
> URL: https://issues.apache.org/jira/browse/HDFS-15596
> Project: Hadoop HDFS
> Issue Type: Sub-task

[jira] [Commented] (HDFS-15596) ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only.

2020-09-24 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201336#comment-17201336
 ] 

Uma Maheswara Rao G commented on HDFS-15596:


distcp fails with the following error:


{code:java}
Error: java.io.IOException: File copy failed: hdfs://ns1/test/test.txt --> 
hdfs://ns1/OzoneTest/test/test.txt at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
 at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:219) at 
org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168) Caused by: 
java.io.IOException: Couldn't run retriable-command: Copying 
hdfs://ns1/test/test.txt to hdfs://ns1/OzoneTest/test/test.txt at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
 at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
 ... 10 more Caused by: java.lang.UnsupportedOperationException: This 
API:create is specific to DFS. Can't run on other fs:o3fs://bucket.vol.ozone1 
at 
org.apache.hadoop.hdfs.ViewDistributedFileSystem.checkDFS(ViewDistributedFileSystem.java:402)
 at 
org.apache.hadoop.hdfs.ViewDistributedFileSystem.create(ViewDistributedFileSystem.java:391)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToFile(RetriableFileCopyCommand.java:201)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:143)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
 at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87) 
... 11 more{code}

> ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, 
> progress, checksumOpt) should not be restricted to DFS only.
> ---
>
> Key: HDFS-15596
> URL: https://issues.apache.org/jira/browse/HDFS-15596
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The ViewHDFS#create(f, permission, cflags, bufferSize, replication, 
> blockSize, progress, checksumOpt) API is already available in FileSystem. It 
> delegates to another overloaded API and can finally reach ViewFileSystem. This 
> case also works in regular ViewFileSystem. With ViewHDFS, we restricted this 
> to DFS only, which causes DistCp to fail when the target is non-HDFS, since 
> DistCp uses this API.
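> A hedged sketch of the delegation described above; resolve() stands in for 
> ViewHDFS's mount-table resolution and is not the real API:
> {code:java}
> import java.io.IOException;
> import java.util.EnumSet;
> import org.apache.hadoop.fs.CreateFlag;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Options.ChecksumOpt;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.FsPermission;
> import org.apache.hadoop.util.Progressable;
> 
> abstract class CreateFallbackSketch {
>   abstract FileSystem resolve(Path f) throws IOException;
> 
>   FSDataOutputStream create(Path f, FsPermission permission,
>       EnumSet<CreateFlag> cflags, int bufferSize, short replication,
>       long blockSize, Progressable progress, ChecksumOpt checksumOpt)
>       throws IOException {
>     FileSystem target = resolve(f); // may be hdfs://, o3fs://, etc.
>     // FileSystem itself declares this overload, so any concrete
>     // FileSystem can serve the call without a DFS-only check.
>     return target.create(f, permission, cflags, bufferSize, replication,
>         blockSize, progress, checksumOpt);
>   }
> }
> {code}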



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15596) ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only.

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15596?focusedWorklogId=490030&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490030
 ]

ASF GitHub Bot logged work on HDFS-15596:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 07:38
Start Date: 24/Sep/20 07:38
Worklog Time Spent: 10m 
  Work Description: umamaheswararao opened a new pull request #2333:
URL: https://github.com/apache/hadoop/pull/2333


   https://issues.apache.org/jira/browse/HDFS-15596



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490030)
Remaining Estimate: 0h
Time Spent: 10m

> ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, 
> progress, checksumOpt) should not be restricted to DFS only.
> ---
>
> Key: HDFS-15596
> URL: https://issues.apache.org/jira/browse/HDFS-15596
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The ViewHDFS#create(f, permission, cflags, bufferSize, replication, 
> blockSize, progress, checksumOpt) API is already available in FileSystem. It 
> delegates to another overloaded API and can finally reach ViewFileSystem. This 
> case also works in regular ViewFileSystem. With ViewHDFS, we restricted this 
> to DFS only, which causes DistCp to fail when the target is non-HDFS, since 
> DistCp uses this API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15596) ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only.

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15596:
--
Labels: pull-request-available  (was: )

> ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, 
> progress, checksumOpt) should not be restricted to DFS only.
> ---
>
> Key: HDFS-15596
> URL: https://issues.apache.org/jira/browse/HDFS-15596
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The ViewHDFS#create(f, permission, cflags, bufferSize, replication, 
> blockSize, progress, checksumOpt) API is already available in FileSystem. It 
> delegates to another overloaded API and can finally reach ViewFileSystem. This 
> case also works in regular ViewFileSystem. With ViewHDFS, we restricted this 
> to DFS only, which causes DistCp to fail when the target is non-HDFS, since 
> DistCp uses this API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15548?focusedWorklogId=490016&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-490016
 ]

ASF GitHub Bot logged work on HDFS-15548:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 07:09
Start Date: 24/Sep/20 07:09
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r494085423



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
##
@@ -412,16 +435,28 @@ long getBlockPoolUsed(String bpid) throws IOException {
*/
   @VisibleForTesting
   public long getCapacity() {
+long capacity;
 if (configuredCapacity < 0L) {
   long remaining;
   if (cachedCapacity > 0L) {
 remaining = cachedCapacity - getReserved();
   } else {
 remaining = usage.getCapacity() - getReserved();
   }
-  return Math.max(remaining, 0L);
+  capacity = Math.max(remaining, 0L);
+} else {
+  capacity = configuredCapacity;
+}
+
+if (enableSameDiskArchival) {

Review comment:
   When this feature is enabled, the return value does not seem to match what 
the annotation says:
   > the capacity of the file system excluding space reserved for non-HDFS.
   
   IMO, the ARCHIVE portion should also be included in the calculation. The 
NameNode does not seem to differentiate between DISK and ARCHIVE for each 
storage of a DataNode. Please correct me if something is wrong.
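   A rough arithmetic sketch of the split in question (the proportional split 
and all names are assumptions for illustration, not the patch's actual logic):
   
{code:java}
public class CapacitySplitSketch {
  // If one mount's usable space is divided between a DISK and an
  // ARCHIVE volume by a reservedForArchive ratio, each volume would
  // report only its share as capacity.
  static long volumeCapacity(long mountCapacity, long reserved,
      double reservedForArchive, boolean isArchive) {
    long usable = Math.max(mountCapacity - reserved, 0L);
    long archiveShare = (long) (usable * reservedForArchive);
    return isArchive ? archiveShare : usable - archiveShare;
  }

  public static void main(String[] args) {
    // Example: 10 TB mount, 100 GB reserved for non-HDFS, 30% ARCHIVE.
    long mount = 10L << 40, reserved = 100L << 30;
    System.out.println("DISK:    " + volumeCapacity(mount, reserved, 0.3, false));
    System.out.println("ARCHIVE: " + volumeCapacity(mount, reserved, 0.3, true));
  }
}
{code}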

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
##
@@ -452,7 +487,33 @@ public long getAvailable() throws IOException {
   }
 
   long getActualNonDfsUsed() throws IOException {
-return usage.getUsed() - getDfsUsed();
+// DISK and ARCHIVAL on same disk

Review comment:
   Same confusion as in the previous comment.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
##
@@ -190,6 +193,26 @@
 }
 this.conf = conf;
 this.fileIoProvider = fileIoProvider;
+this.enableSameDiskArchival =
+conf.getBoolean(DFSConfigKeys.DFS_DATANODE_ALLOW_SAME_DISK_TIERING,
+DFSConfigKeys.DFS_DATANODE_ALLOW_SAME_DISK_TIERING_DEFAULT);
+if (enableSameDiskArchival) {
+  this.mount = usage.getMount();
+  reservedForArchive = conf.getDouble(

Review comment:
   `reservedForArchive` tries to define the percentage reserved for archive. If 
there are heterogeneous disks on one node, do we need to configure them separately?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
##
@@ -62,9 +64,14 @@
   private final VolumeChoosingPolicy blockChooser;
   private final BlockScanner blockScanner;
 
+  private boolean enableSameDiskTiering;

Review comment:
   `enableSameDiskTiering` here vs `enableSameDiskArchival` in 
FsVolumeImpl: we should unify the variable names.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 490016)
Time Spent: 4h 10m  (was: 4h)

> Allow configuring DISK/ARCHIVE storage types on same device mount
> -
>
> Key: HDFS-15548
> URL: https://issues.apache.org/jira/browse/HDFS-15548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> We can allow configuring DISK/ARCHIVE storage types on the same device mount 
> on two separate directories.
> Users should be able to configure the capacity for each. Also, the datanode 
> usage report should report stats correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13293) RBF: The RouterRPCServer should transfer CallerContext and client ip to NamenodeRpcServer

2020-09-24 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201287#comment-17201287
 ] 

Akira Ajisaka commented on HDFS-13293:
--

Go ahead. Thank you [~ferhui].
{quote}{noformat}
import org.apache.hadoop.ipc.ProtobufRpcEngine.Server; {noformat}{quote}

ProtobufRpcEngine is deprecated. Please use ProtobufRpcEngine2 instead.
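
A minimal sketch of that switch (MyProtocolPB below is a hypothetical 
placeholder for the real protocol interface; RPC.setProtocolEngine is the 
standard registration call):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.ProtobufRpcEngine2;
import org.apache.hadoop.ipc.RPC;

public class RpcEngineMigration {
  // Placeholder for the real PB protocol interface being registered.
  interface MyProtocolPB {}

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Same registration call as before, now with the non-deprecated engine.
    RPC.setProtocolEngine(conf, MyProtocolPB.class, ProtobufRpcEngine2.class);
  }
}
{code}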

> RBF: The RouterRPCServer should transfer CallerContext and client ip to 
> NamenodeRpcServer
> -
>
> Key: HDFS-13293
> URL: https://issues.apache.org/jira/browse/HDFS-13293
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-13293.001.patch
>
>
> Otherwise, the NameNode doesn't know the client's CallerContext.
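> A hedged sketch of the idea (the append format and the "clientIp" key are 
> assumptions for illustration, not the actual patch):
> {code:java}
> import org.apache.hadoop.ipc.CallerContext;
> 
> public class RouterCallerContextSketch {
>   // Before the Router forwards a call, append the real client's context
>   // and ip so the NameNode can log and audit them.
>   static void forwardContext(String clientIp) {
>     CallerContext current = CallerContext.getCurrent();
>     String base = (current == null) ? "" : current.getContext() + ",";
>     CallerContext.setCurrent(
>         new CallerContext.Builder(base + "clientIp:" + clientIp).build());
>   }
> }
> {code}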



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org