[jira] [Commented] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-23 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201272#comment-17201272
 ] 

Wei-Chiu Chuang commented on HDFS-15415:


The code looks good. I wanted to verify that the test failures are unrelated, 
but one of the tests kept failing on my local machine. Could you check again? 
It might be a local setup problem, but I'd just like to double check.

> Reduce locking in Datanode DirectoryScanner
> ---
>
> Key: HDFS-15415
> URL: https://issues.apache.org/jira/browse/HDFS-15415
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15415.001.patch, HDFS-15415.002.patch, 
> HDFS-15415.003.patch, HDFS-15415.004.patch, HDFS-15415.005.patch, 
> HDFS-15415.branch-3.2.001.patch, HDFS-15415.branch-3.2.002.patch, 
> HDFS-15415.branch-3.3.001.patch
>
>
> In HDFS-15406, we made a small change to greatly reduce the runtime and 
> locking time of the datanode DirectoryScanner. There may be room for further 
> improvement.
> From the scan step, we have captured a snapshot of what is on disk. After 
> calling `dataset.getFinalizedBlocks(bpid);` we have a snapshot of what is in 
> memory. The two snapshots are never 100% in sync, as things are always 
> changing while the disk is scanned.
> We are only comparing finalized blocks, so they should not really change:
> * If a block is deleted after our snapshot, our snapshot will not see it and 
> that is OK.
> * A finalized block could be appended. If that happens, both the genstamp and 
> length will change, but that should be handled by reconcile when it calls 
> `FSDatasetImpl.checkAndUpdate()`. Nothing stops blocks from being appended 
> after they have been scanned on disk but before they have been compared with 
> memory.
> My suspicion is that we can do all the comparison work outside of the lock, 
> and have checkAndUpdate() re-check any differences later, under the lock, on 
> a block-by-block basis.
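To make the idea concrete, a minimal sketch of the proposed flow is below. The 
names used (scanDirs, the ScanInfo accessors, the checkAndUpdate signature) 
are assumptions for illustration only, not the actual patch:

{code:java}
// Minimal sketch; helper names and signatures are assumed, not the patch.
void reconcile(String bpid) throws IOException {
  // Snapshot what is on disk (scan step) and what is in memory; neither
  // step needs the dataset lock.
  Map<Long, ScanInfo> diskSnapshot = scanDirs(bpid);
  List<ReplicaInfo> memSnapshot = dataset.getFinalizedBlocks(bpid);

  // Do all the comparison work outside of the lock.
  List<Long> diffBlockIds = new ArrayList<>();
  for (ReplicaInfo mem : memSnapshot) {
    ScanInfo disk = diskSnapshot.get(mem.getBlockId());
    if (disk == null                                   // missing on disk
        || disk.getGenStamp() != mem.getGenerationStamp()
        || disk.getBlockLength() != mem.getNumBytes()) {
      diffBlockIds.add(mem.getBlockId());
    }
  }

  // Re-check each difference later, under the lock, on a block-by-block
  // basis; anything that changed after the snapshots were taken (e.g. an
  // append) is corrected here. (Assumed signature.)
  for (long blockId : diffBlockIds) {
    dataset.checkAndUpdate(bpid, blockId, diskSnapshot.get(blockId));
  }
}
{code}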



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=489976&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489976
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 04:53
Start Date: 24/Sep/20 04:53
Worklog Time Spent: 10m 
  Work Description: brahmareddybattula commented on a change in pull 
request #2189:
URL: https://github.com/apache/hadoop/pull/2189#discussion_r494037751



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
##
@@ -34,28 +34,35 @@
 @InterfaceStability.Unstable
 public enum StorageType {
   // sorted by the speed of the storage types, from fast to slow
-  RAM_DISK(true),
-  SSD(false),
-  DISK(false),
-  ARCHIVE(false),
-  PROVIDED(false);
+  RAM_DISK(true, true),
+  NVDIMM(false, true),
+  SSD(false, false),
+  DISK(false, false),
+  ARCHIVE(false, false),
+  PROVIDED(false, false);
 
   private final boolean isTransient;
+  private final boolean isRAM;
 
   public static final StorageType DEFAULT = DISK;
 
   public static final StorageType[] EMPTY_ARRAY = {};
 
   private static final StorageType[] VALUES = values();
 
-  StorageType(boolean isTransient) {
+  StorageType(boolean isTransient, boolean isRAM) {
     this.isTransient = isTransient;
+    this.isRAM = isRAM;
   }
 
   public boolean isTransient() {
     return isTransient;
   }
 
+  public boolean isRAM() {
+    return isRAM;
+  }

Review comment:
   ok
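
For context, the effect of the new flag, as implied by the constructor 
arguments in the hunk above, can be illustrated like this:

{code:java}
// Illustration only, based on the constructor arguments in the hunk above:
// NVDIMM is RAM-based (isRAM == true) but, unlike RAM_DISK, not transient,
// so replicas stored on it survive a datanode restart.
for (StorageType t : StorageType.values()) {
  System.out.println(t + ": transient=" + t.isTransient()
      + ", ram=" + t.isRAM());
}
// RAM_DISK: transient=true, ram=true
// NVDIMM:   transient=false, ram=true
// SSD, DISK, ARCHIVE, PROVIDED: transient=false, ram=false
{code}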





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489976)
Time Spent: 11h  (was: 10h 50m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 11h
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.
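
As a rough illustration of configuring such a storage type on a datanode, 
assuming NVDIMM follows the same [STORAGE_TYPE] directory-tag convention as 
the existing types in dfs.datanode.data.dir (the mount path is hypothetical):

{code}
<!-- Hypothetical example: assumes NVDIMM follows the same [STORAGE_TYPE]
     directory-tag convention as the existing types. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[NVDIMM]/mnt/pmem0/hdfs/data,[DISK]/data1/hdfs/data</value>
</property>
{code}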



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-23 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201259#comment-17201259
 ] 

Xiaoqiao He commented on HDFS-15594:


Thanks [~NickyYe] and [~elgoiri], this is useful work IMO. Just one concern: 
will it confuse end users that, during the startup phase, the live datanode 
count is not calculated as long as the block threshold has not been met?
Anyway, I have always thought that the block threshold is sufficient for 
general NameNode restart cases, so +1 for this improvement from my side.

> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The number of live datanodes is not calculated since reported blocks hasn't 
> reached the threshold. Safe mode will be turned off automatically once the 
> thresholds have been reached.{code}
>  
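
A minimal sketch of what "lazy calculate" means here; the method and field 
names below are assumptions for illustration, not the actual patch:

{code:java}
// Sketch only: skip the live-datanode count (a scan over the datanode map)
// while the reported-block threshold has not yet been reached.
String getSafeModeTip() {
  StringBuilder tip = new StringBuilder(blockThresholdMsg());
  if (blockSafe < blockThreshold) {
    tip.append("The number of live datanodes is not calculated since ")
        .append("reported blocks hasn't reached the threshold. ");
  } else {
    tip.append(liveDatanodeMsg(getNumLiveDataNodes()));  // computed lazily
  }
  return tip.append("Safe mode will be turned off automatically once ")
      .append("the thresholds have been reached.").toString();
}
{code}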



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15597) ContentSummary.getSpaceConsumed does not consider replication

2020-09-23 Thread Ajmal Ahammed (Jira)
Ajmal Ahammed created HDFS-15597:


 Summary: ContentSummary.getSpaceConsumed does not consider 
replication
 Key: HDFS-15597
 URL: https://issues.apache.org/jira/browse/HDFS-15597
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfs
Affects Versions: 2.6.0
Reporter: Ajmal Ahammed


I am trying to get the disk space consumed by an HDFS directory using the 
{{ContentSummary.getSpaceConsumed}} method. I can't get the space consumption 
to correctly account for the replication factor. The replication factor is 2, 
and I was expecting twice the actual file size from the above method.


{code}
ubuntu@ubuntu:~/ht$ sudo -u hdfs hdfs dfs -ls /var/lib/ubuntu
Found 2 items
-rw-r--r--   2 ubuntu ubuntu3145728 2020-09-08 09:55 
/var/lib/ubuntu/size-test
drwxrwxr-x   - ubuntu ubuntu  0 2020-09-07 06:37 /var/lib/ubuntu/test
{code}

But when I run the following code,
{code}
// conf (a Configuration with the cluster settings) and fileStatus (the
// FileStatus of /var/lib/ubuntu/size-test) are assumed to be initialized
// earlier in the reporter's program.
String path = "/etc/hadoop/conf/";
conf.addResource(new Path(path + "core-site.xml"));
conf.addResource(new Path(path + "hdfs-site.xml"));
long size = FileContext.getFileContext(conf).util()
    .getContentSummary(fileStatus.getPath()).getSpaceConsumed();
System.out.println("Replication : " + fileStatus.getReplication());
System.out.println("File size : " + size);
{code}

The output is

{code}
Replication : 0
File size : 3145728
{code}
Both the file size and the replication factor seem to be incorrect.


/etc/hadoop/conf/hdfs-site.xml contains the following config:

{code}
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
{code}
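
One way to narrow this down with standard FileSystem APIs (the diagnosis is a 
guess, not a confirmed root cause): {{FileStatus.getReplication()}} returns 0 
for a directory, and {{ContentSummary.getSpaceConsumed()}} reports bytes 
multiplied by replication as tracked by the NameNode, so querying the file 
itself should show which side is off:

{code:java}
// Reuses the reporter's conf; the path is taken from the ls output above.
FileSystem fs = FileSystem.get(conf);
Path file = new Path("/var/lib/ubuntu/size-test");
ContentSummary cs = fs.getContentSummary(file);
System.out.println("Length           : " + cs.getLength());        // raw bytes
System.out.println("Space consumed   : " + cs.getSpaceConsumed()); // bytes * replication
System.out.println("File replication : "
    + fs.getFileStatus(file).getReplication());
{code}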



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-23 Thread Shashikant Banerjee (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201254#comment-17201254
 ] 

Shashikant Banerjee commented on HDFS-15595:


Thanks [~liuml07] for filing the issue. The test failure will be addressed with 
https://issues.apache.org/jira/browse/HDFS-15590.

> TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
> 
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Assignee: Shashikant Banerjee
>Priority: Major
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-23 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDFS-15595:
--

Assignee: Shashikant Banerjee

> TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
> 
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Assignee: Shashikant Banerjee
>Priority: Major
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=489954&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489954
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 02:41
Start Date: 24/Sep/20 02:41
Worklog Time Spent: 10m 
  Work Description: liuml07 commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-698076876


   Yeah, for this checkstyle issue in the test we can keep it as-is for the 
sake of code consistency. Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489954)
Time Spent: 10h 50m  (was: 10h 40m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 10h 50m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15596) ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only.

2020-09-23 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-15596:
---
Environment: (was: The ViewHDFS#create(f, permission, cflags, 
bufferSize, replication, blockSize, progress, checksumOpt) API is already 
available in FileSystem. It delegates to another overloaded API and can 
ultimately reach ViewFileSystem. This case also works in regular 
ViewFileSystem. With ViewHDFS, we restricted this API to DFS only, which 
causes distcp to fail when the target is non-HDFS, since distcp uses this 
API.)

> ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, 
> progress, checksumOpt) should not be restricted to DFS only.
> ---
>
> Key: HDFS-15596
> URL: https://issues.apache.org/jira/browse/HDFS-15596
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15596) ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only.

2020-09-23 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-15596:
---
Description: The ViewHDFS#create(f, permission, cflags, bufferSize, 
replication, blockSize, progress, checksumOpt) API is already available in 
FileSystem. It delegates to another overloaded API and can ultimately reach 
ViewFileSystem. This case also works in regular ViewFileSystem. With 
ViewHDFS, we restricted this API to DFS only, which causes distcp to fail 
when the target is non-HDFS, since distcp uses this API.

> ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, 
> progress, checksumOpt) should not be restricted to DFS only.
> ---
>
> Key: HDFS-15596
> URL: https://issues.apache.org/jira/browse/HDFS-15596
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>
> The ViewHDFS#create(f, permission, cflags, bufferSize, replication, 
> blockSize, progress, checksumOpt) API is already available in FileSystem. 
> It delegates to another overloaded API and can ultimately reach 
> ViewFileSystem. This case also works in regular ViewFileSystem. With 
> ViewHDFS, we restricted this API to DFS only, which causes distcp to fail 
> when the target is non-HDFS, since distcp uses this API.
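
A rough sketch of the implied fix; the signature matches FileSystem's 
existing create overload, but the vfs field and the fallback logic are 
assumptions, not the committed change:

{code:java}
@Override
public FSDataOutputStream create(Path f, FsPermission permission,
    EnumSet<CreateFlag> cflags, int bufferSize, short replication,
    long blockSize, Progressable progress, Options.ChecksumOpt checksumOpt)
    throws IOException {
  if (vfs == null) {
    // Mount points not enabled: plain DFS behaviour.
    return super.create(f, permission, cflags, bufferSize, replication,
        blockSize, progress, checksumOpt);
  }
  // Resolve through the mount table instead of insisting on a DFS target,
  // so the path may land on a non-HDFS file system (e.g. a distcp target).
  return vfs.create(f, permission, cflags, bufferSize, replication,
      blockSize, progress, checksumOpt);
}
{code}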



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15596) ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only.

2020-09-23 Thread Uma Maheswara Rao G (Jira)
Uma Maheswara Rao G created HDFS-15596:
--

 Summary: ViewHDFS#create(f, permission, cflags, bufferSize, 
replication, blockSize, progress, checksumOpt) should not be restricted to DFS 
only.
 Key: HDFS-15596
 URL: https://issues.apache.org/jira/browse/HDFS-15596
 Project: Hadoop HDFS
  Issue Type: Sub-task
 Environment: The ViewHDFS#create(f, permission, cflags, bufferSize, 
replication, blockSize, progress, checksumOpt) API is already available in 
FileSystem. It delegates to another overloaded API and can ultimately reach 
ViewFileSystem. This case also works in regular ViewFileSystem. With 
ViewHDFS, we restricted this API to DFS only, which causes distcp to fail 
when the target is non-HDFS, since distcp uses this API.
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=489946&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489946
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 24/Sep/20 02:17
Start Date: 24/Sep/20 02:17
Worklog Time Spent: 10m 
  Work Description: huangtianhua commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-698070550


   @liuml07, thanks, we have updated the code. It seems there is a checkstyle 
warning about the storage enum name 'nvdimm'; the name should be kept as it 
is, to stay consistent with the other storage types. What do you think?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489946)
Time Spent: 10h 40m  (was: 10.5h)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-23 Thread Ye Ni (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ye Ni updated HDFS-15594:
-
Description: 
Safe mode tip is printed every 20 seconds.

This change defers calculating live datanodes until the reported block 
threshold is met.
 Old 
{code:java}
STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
live datanodes 2531 has reached the minimum number 1. Safe mode will be turned 
off automatically once the thresholds have been reached.{code}
New 
{code:java}
STATE* Safe mode ON. 
The reported blocks 134851250 needs additional 3218494 blocks to reach the 
threshold 0.9990 of total blocks 138207947.
The number of live datanodes is not calculated since reported blocks hasn't 
reached the threshold. Safe mode will be turned off automatically once the 
thresholds have been reached.{code}
 

  was:
Safe mode tip is printed every 20 seconds.

This change defers calculating live datanodes until the reported block 
threshold is met.
 Old 
{code:java}
STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
live datanodes 2531 has reached the minimum number 1. Safe mode will be turned 
off automatically once the thresholds have been reached.{code}
New 
{code:java}
STATE* Safe mode ON. 
The reported blocks 134851250 needs additional 3218494 blocks to reach the 
threshold 0.9990 of total blocks 138207947.
The minimum number of live datanodes is not calculated since reported blocks 
hasn't reached the threshold. Safe mode will be turned off automatically once 
the thresholds have been reached.{code}
 


> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The number of live datanodes is not calculated since reported blocks hasn't 
> reached the threshold. Safe mode will be turned off automatically once the 
> thresholds have been reached.{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-23 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15595:
-
Summary: TestSnapshotCommands.testMaxSnapshotLimit fails in trunk  (was: 
stSnapshotCommands.testMaxSnapshotLimit fails in trunk)

> TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
> 
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Priority: Major
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15595) stSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-23 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201093#comment-17201093
 ] 

Mingliang Liu commented on HDFS-15595:
--

CC: [~shashikant] [~szetszwo] and [~msingh]


> stSnapshotCommands.testMaxSnapshotLimit fails in trunk
> --
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Priority: Major
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15595) stSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-23 Thread Mingliang Liu (Jira)
Mingliang Liu created HDFS-15595:


 Summary: stSnapshotCommands.testMaxSnapshotLimit fails in trunk
 Key: HDFS-15595
 URL: https://issues.apache.org/jira/browse/HDFS-15595
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, snapshots, test
Reporter: Mingliang Liu


See 
[this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
 for a sample error.

Sample error stack:
{quote}
Error Message
The real output is: createSnapshot: Failed to create snapshot: there are 
already 4 snapshot(s) and the per directory snapshot limit is 3
.
 It should contain: Failed to add snapshot: there are already 3 snapshot(s) and 
the max snapshot limit is 3
Stacktrace
java.lang.AssertionError: 
The real output is: createSnapshot: Failed to create snapshot: there are 
already 4 snapshot(s) and the per directory snapshot limit is 3
.
 It should contain: Failed to add snapshot: there are already 3 snapshot(s) and 
the max snapshot limit is 3
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
at 
org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{quote}

I can also reproduce this locally.
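
To reproduce locally, a typical Surefire invocation (standard Maven syntax; 
run from the hadoop-hdfs-project/hadoop-hdfs module) would be:

{code}
mvn test -Dtest=TestSnapshotCommands#testMaxSnapshotLimit
{code}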



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15595) stSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-23 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15595:
-
Target Version/s: 3.4.0

> stSnapshotCommands.testMaxSnapshotLimit fails in trunk
> --
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Priority: Major
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-23 Thread Íñigo Goiri (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201072#comment-17201072
 ] 

Íñigo Goiri commented on HDFS-15594:


[~hexiaoqiao], you did something similar in HDFS-14632, can you take a look?

> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The minimum number of live datanodes is not calculated since reported blocks 
> hasn't reached the threshold. Safe mode will be turned off automatically once 
> the thresholds have been reached.{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-23 Thread Ye Ni (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ye Ni updated HDFS-15594:
-
Status: Patch Available  (was: Open)

> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The minimum number of live datanodes is not calculated since reported blocks 
> hasn't reached the threshold. Safe mode will be turned off automatically once 
> the thresholds have been reached.{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15594:
--
Labels: pull-request-available  (was: )

> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The minimum number of live datanodes is not calculated since reported blocks 
> hasn't reached the threshold. Safe mode will be turned off automatically once 
> the thresholds have been reached.{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15594?focusedWorklogId=489817&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489817
 ]

ASF GitHub Bot logged work on HDFS-15594:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 20:12
Start Date: 23/Sep/20 20:12
Worklog Time Spent: 10m 
  Work Description: NickyYe opened a new pull request #2332:
URL: https://github.com/apache/hadoop/pull/2332


   https://issues.apache.org/jira/browse/HDFS-15594
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489817)
Remaining Estimate: 0h
Time Spent: 10m

> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The minimum number of live datanodes is not calculated since reported blocks 
> hasn't reached the threshold. Safe mode will be turned off automatically once 
> the thresholds have been reached.{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-23 Thread Ye Ni (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ye Ni reassigned HDFS-15594:


Assignee: Ye Ni

> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>
> Safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The minimum number of live datanodes is not calculated since reported blocks 
> hasn't reached the threshold. Safe mode will be turned off automatically once 
> the thresholds have been reached.{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-23 Thread Ye Ni (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ye Ni updated HDFS-15594:
-
Description: 
Safe mode tip is printed every 20 seconds.

This change defers calculating live datanodes until the reported block 
threshold is met.
 This could further reduce safe mode time from 1 hour to 45 minutes in 
MTPrime-CO4-3.

Old 
{code:java}
STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
live datanodes 2531 has reached the minimum number 1. Safe mode will be turned 
off automatically once the thresholds have been reached.{code}
New 
{code:java}
STATE* Safe mode ON. 
The reported blocks 134851250 needs additional 3218494 blocks to reach the 
threshold 0.9990 of total blocks 138207947.
The minimum number of live datanodes is not calculated since reported blocks 
hasn't reached the threshold. Safe mode will be turned off automatically once 
the thresholds have been reached.{code}

  was:
Safe mode tip is printed every 20 seconds.

This change defers calculating live datanodes until the reported block 
threshold is met.
This could further reduce safe mode time from 1 hour to 45 minutes in 
MTPrime-CO4-3.

Old 

{code:java}
STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
live datanodes 2531 has reached the minimum number 1. Safe mode will be turned 
off automatically once the thresholds have been reached.{code}

New 

{code:java}
STATE* Safe mode ON. 
The reported blocks 134851250 needs additional 3218494 blocks to reach the 
threshold 0.9990 of total blocks 138207947.
The minimum number of live datanodes is not calculated since reported blocks 
hasn't reached the threshold. Safe mode will be turned off automatically once 
the thresholds have been reached.{code}


> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Priority: Minor
>
> Safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  This could further reduce safe mode time from 1 hour to 45 minutes in 
> MTPrime-CO4-3.
> Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The minimum number of live datanodes is not calculated since reported blocks 
> hasn't reached the threshold. Safe mode will be turned off automatically once 
> the thresholds have been reached.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-23 Thread Ye Ni (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ye Ni updated HDFS-15594:
-
Description: 
Safe mode tip is printed every 20 seconds.

This change defers calculating live datanodes until the reported block 
threshold is met.
 Old 
{code:java}
STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
live datanodes 2531 has reached the minimum number 1. Safe mode will be turned 
off automatically once the thresholds have been reached.{code}
New 
{code:java}
STATE* Safe mode ON. 
The reported blocks 134851250 needs additional 3218494 blocks to reach the 
threshold 0.9990 of total blocks 138207947.
The minimum number of live datanodes is not calculated since reported blocks 
hasn't reached the threshold. Safe mode will be turned off automatically once 
the thresholds have been reached.{code}

  was:
Safe mode tip is printed every 20 seconds.

This change defers calculating live datanodes until the reported block 
threshold is met.
 This could further reduce safe mode time from 1 hour to 45 minutes in 
MTPrime-CO4-3.

Old 
{code:java}
STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
live datanodes 2531 has reached the minimum number 1. Safe mode will be turned 
off automatically once the thresholds have been reached.{code}
New 
{code:java}
STATE* Safe mode ON. 
The reported blocks 134851250 needs additional 3218494 blocks to reach the 
threshold 0.9990 of total blocks 138207947.
The minimum number of live datanodes is not calculated since reported blocks 
hasn't reached the threshold. Safe mode will be turned off automatically once 
the thresholds have been reached.{code}


> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Priority: Minor
>
> Safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The minimum number of live datanodes is not calculated since reported blocks 
> hasn't reached the threshold. Safe mode will be turned off automatically once 
> the thresholds have been reached.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-23 Thread Ye Ni (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ye Ni updated HDFS-15594:
-
Description: 
Safe mode tip is printed every 20 seconds.

This change defers calculating live datanodes until the reported block 
threshold is met.
 Old 
{code:java}
STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
live datanodes 2531 has reached the minimum number 1. Safe mode will be turned 
off automatically once the thresholds have been reached.{code}
New 
{code:java}
STATE* Safe mode ON. 
The reported blocks 134851250 needs additional 3218494 blocks to reach the 
threshold 0.9990 of total blocks 138207947.
The minimum number of live datanodes is not calculated since reported blocks 
hasn't reached the threshold. Safe mode will be turned off automatically once 
the thresholds have been reached.{code}
 

  was:
Safe mode tip is printed every 20 seconds.

This change defers calculating live datanodes until the reported block 
threshold is met.
 Old 
{code:java}
STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
live datanodes 2531 has reached the minimum number 1. Safe mode will be turned 
off automatically once the thresholds have been reached.{code}
New 
{code:java}
STATE* Safe mode ON. 
The reported blocks 134851250 needs additional 3218494 blocks to reach the 
threshold 0.9990 of total blocks 138207947.
The minimum number of live datanodes is not calculated since reported blocks 
hasn't reached the threshold. Safe mode will be turned off automatically once 
the thresholds have been reached.{code}


> Lazy calculate live datanodes in safe mode tip
> --
>
> Key: HDFS-15594
> URL: https://issues.apache.org/jira/browse/HDFS-15594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ye Ni
>Priority: Minor
>
> Safe mode tip is printed every 20 seconds.
> This change defers calculating live datanodes until the reported block 
> threshold is met.
>  Old 
> {code:java}
> STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
> blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
> live datanodes 2531 has reached the minimum number 1. Safe mode will be 
> turned off automatically once the thresholds have been reached.{code}
> New 
> {code:java}
> STATE* Safe mode ON. 
> The reported blocks 134851250 needs additional 3218494 blocks to reach the 
> threshold 0.9990 of total blocks 138207947.
> The minimum number of live datanodes is not calculated since reported blocks 
> hasn't reached the threshold. Safe mode will be turned off automatically once 
> the thresholds have been reached.{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15594) Lazy calculate live datanodes in safe mode tip

2020-09-23 Thread Ye Ni (Jira)
Ye Ni created HDFS-15594:


 Summary: Lazy calculate live datanodes in safe mode tip
 Key: HDFS-15594
 URL: https://issues.apache.org/jira/browse/HDFS-15594
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Ye Ni


Safe mode tip is printed every 20 seconds.

This change defers calculating live datanodes until the reported block 
threshold is met.
This could further reduce safe mode time from 1 hour to 45 minutes in 
MTPrime-CO4-3.

Old 

{code:java}
STATE* Safe mode ON. The reported blocks 111054015 needs additional 27902753 
blocks to reach the threshold 0.9990 of total blocks 139095856. The number of 
live datanodes 2531 has reached the minimum number 1. Safe mode will be turned 
off automatically once the thresholds have been reached.{code}

New 

{code:java}
STATE* Safe mode ON. 
The reported blocks 134851250 needs additional 3218494 blocks to reach the 
threshold 0.9990 of total blocks 138207947.
The minimum number of live datanodes is not calculated since reported blocks 
hasn't reached the threshold. Safe mode will be turned off automatically once 
the thresholds have been reached.{code}
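A minimal sketch of the lazy calculation described above (illustrative only; 
class and field names are hypothetical, not the actual HDFS-15594 patch). The 
live-datanode count is wrapped in a supplier so it is only queried once the 
reported-block threshold has been reached:

{code:java}
import java.util.function.LongSupplier;

// Illustrative sketch: skip the (potentially expensive) live-datanode
// count while the reported-block threshold has not been reached.
class SafeModeTipSketch {
  private final long blockThreshold;            // blocks needed to leave safe mode
  private final LongSupplier liveDatanodeCount; // deferred, possibly costly

  SafeModeTipSketch(long blockThreshold, LongSupplier liveDatanodeCount) {
    this.blockThreshold = blockThreshold;
    this.liveDatanodeCount = liveDatanodeCount;
  }

  String tip(long blockSafe) {
    StringBuilder sb = new StringBuilder("STATE* Safe mode ON. ");
    if (blockSafe < blockThreshold) {
      sb.append("The reported blocks ").append(blockSafe)
        .append(" needs additional ").append(blockThreshold - blockSafe)
        .append(" blocks to reach the threshold. ")
        // The supplier is never invoked on this branch.
        .append("The minimum number of live datanodes is not calculated "
            + "since reported blocks hasn't reached the threshold.");
    } else {
      sb.append("The number of live datanodes ")
        .append(liveDatanodeCount.getAsLong())
        .append(" has reached the minimum number.");
    }
    return sb.toString();
  }
}
{code}

For example, new SafeModeTipSketch(138069739L, () -> 2531L).tip(134851250L) 
takes the cheap branch and never invokes the supplier.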



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=489792=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489792
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 19:40
Start Date: 23/Sep/20 19:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-668645247







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489792)
Time Spent: 10.5h  (was: 10h 20m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> Non-volatile NVDIMM memory is faster than SSD and can be used 
> simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.
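For illustration only, a hedged sketch of how a persistent-memory directory 
might be tagged with the new storage type in datanode configuration (the 
[NVDIMM] tag and the mount paths are assumptions, not confirmed in this 
thread):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class NvdimmDirConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Tag a persistent-memory mount as NVDIMM next to an ordinary DISK dir;
    // both the [NVDIMM] tag and the mount paths are illustrative assumptions.
    conf.set("dfs.datanode.data.dir",
        "[NVDIMM]/mnt/pmem0/hdfs/data,[DISK]/data/1/hdfs/data");
    System.out.println(conf.get("dfs.datanode.data.dir"));
  }
}
{code}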



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=489791=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489791
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 19:39
Start Date: 23/Sep/20 19:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-668517822







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489791)
Time Spent: 10h 20m  (was: 10h 10m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> Non-volatile NVDIMM memory is faster than SSD and can be used 
> simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200938#comment-17200938
 ] 

Íñigo Goiri commented on HDFS-15591:


[~wangzhaohui], thanks for the patch.
Can you add a test?

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, after-1.jpg, after-2.jpg, 
> before-1.jpg, before-2.jpg
>
>
> The path mounted by the router does not exist on the NN; the router will 
> create a virtual folder with the mount name, but the "browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15593) Hadoop - Upgrade to JQuery 3.5.1

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15593?focusedWorklogId=489611=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489611
 ]

ASF GitHub Bot logged work on HDFS-15593:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 15:07
Start Date: 23/Sep/20 15:07
Worklog Time Spent: 10m 
  Work Description: aryangupta1998 opened a new pull request #2330:
URL: https://github.com/apache/hadoop/pull/2330


   ## NOTICE
   
   jQuery version is being upgraded from jquery-3.4.1.min.js to 
jquery-3.5.1.min.js
   Jira Link - [HDFS-15593](https://issues.apache.org/jira/browse/HDFS-15593)
   
   Tested manually. Also, the NN UI is working fine.
   https://user-images.githubusercontent.com/44232823/94031374-66f22880-fddc-11ea-8353-76a6fc939c06.png
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489611)
Remaining Estimate: 0h
Time Spent: 10m

> Hadoop - Upgrade to JQuery 3.5.1
> 
>
> Key: HDFS-15593
> URL: https://issues.apache.org/jira/browse/HDFS-15593
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Aryan Gupta
>Assignee: Aryan Gupta
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> jQuery version is being upgraded from jquery-3.4.1.min.js to 
> jquery-3.5.1.min.js



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15593) Hadoop - Upgrade to JQuery 3.5.1

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15593:
--
Labels: pull-request-available  (was: )

> Hadoop - Upgrade to JQuery 3.5.1
> 
>
> Key: HDFS-15593
> URL: https://issues.apache.org/jira/browse/HDFS-15593
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Aryan Gupta
>Assignee: Aryan Gupta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> jQuery version is being upgraded from jquery-3.4.1.min.js to 
> jquery-3.5.1.min.js



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8432) Introduce a minimum compatible layout version to allow downgrade in more rolling upgrade use cases.

2020-09-23 Thread fengwu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200847#comment-17200847
 ] 

fengwu commented on HDFS-8432:
--

[~heliangjun], in my case, we committed HDFS-8791 to HDFS 2.x.

> Introduce a minimum compatible layout version to allow downgrade in more 
> rolling upgrade use cases.
> ---
>
> Key: HDFS-8432
> URL: https://issues.apache.org/jira/browse/HDFS-8432
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, rolling upgrades
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-8432-HDFS-Downgrade-Extended-Support.pdf, 
> HDFS-8432-branch-2.002.patch, HDFS-8432-branch-2.003.patch, 
> HDFS-8432.001.patch, HDFS-8432.002.patch
>
>
> Maintain the prior layout version during the upgrade window and reject 
> attempts to use new features until after the upgrade has been finalized.  
> This guarantees that the prior software version can read the fsimage and edit 
> logs if the administrator decides to downgrade.  This will make downgrade 
> usable for the majority of NameNode layout version changes, which just 
> involve introduction of new edit log operations.
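As a hedged illustration of the gate described above (hypothetical names, not 
the actual NameNode code; the committed approach may differ, this only 
sketches the "reject until finalized" idea):

{code:java}
import java.io.IOException;

class LayoutVersionGateSketch {
  // Oldest layout version that must remain readable (NameNode layout
  // versions are negative and decrease as features are added).
  private final int compatibleLayoutVersion;
  private final boolean upgradeFinalized;

  LayoutVersionGateSketch(int compatibleLayoutVersion,
      boolean upgradeFinalized) {
    this.compatibleLayoutVersion = compatibleLayoutVersion;
    this.upgradeFinalized = upgradeFinalized;
  }

  // Reject an operation that needs a newer (more negative) layout version
  // while a rolling upgrade has not been finalized.
  void checkFeatureAllowed(int featureLayoutVersion) throws IOException {
    if (!upgradeFinalized && featureLayoutVersion < compatibleLayoutVersion) {
      throw new IOException("Feature needs layout version "
          + featureLayoutVersion + ", but the fsimage and edit logs must stay "
          + "readable at layout version " + compatibleLayoutVersion
          + " until the upgrade is finalized");
    }
  }
}
{code}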



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15593) Hadoop - Upgrade to JQuery 3.5.1

2020-09-23 Thread Aryan Gupta (Jira)
Aryan Gupta created HDFS-15593:
--

 Summary: Hadoop - Upgrade to JQuery 3.5.1
 Key: HDFS-15593
 URL: https://issues.apache.org/jira/browse/HDFS-15593
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Aryan Gupta
Assignee: Aryan Gupta


jQuery version is being upgraded from jquery-3.4.1.min.js to jquery-3.5.1.min.js



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15590?focusedWorklogId=489548=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489548
 ]

ASF GitHub Bot logged work on HDFS-15590:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 12:07
Start Date: 23/Sep/20 12:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2326:
URL: https://github.com/apache/hadoop/pull/2326#issuecomment-697320549


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 28s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 24s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 22s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 39s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 104m  4s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 190m 45s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   |   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
   |   | hadoop.hdfs.TestFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2326 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9fbfa264c0e2 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 

[jira] [Commented] (HDFS-8432) Introduce a minimum compatible layout version to allow downgrade in more rolling upgrade use cases.

2020-09-23 Thread liangjun he (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200753#comment-17200753
 ] 

liangjun he commented on HDFS-8432:
---

[~fengwu99], we have successfully upgraded our 5,000+ node cluster from 2.6.0 
to 3.2.1.

> Introduce a minimum compatible layout version to allow downgrade in more 
> rolling upgrade use cases.
> ---
>
> Key: HDFS-8432
> URL: https://issues.apache.org/jira/browse/HDFS-8432
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, rolling upgrades
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-8432-HDFS-Downgrade-Extended-Support.pdf, 
> HDFS-8432-branch-2.002.patch, HDFS-8432-branch-2.003.patch, 
> HDFS-8432.001.patch, HDFS-8432.002.patch
>
>
> Maintain the prior layout version during the upgrade window and reject 
> attempts to use new features until after the upgrade has been finalized.  
> This guarantees that the prior software version can read the fsimage and edit 
> logs if the administrator decides to downgrade.  This will make downgrade 
> usable for the majority of NameNode layout version changes, which just 
> involve introduction of new edit log operations.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8432) Introduce a minimum compatible layout version to allow downgrade in more rolling upgrade use cases.

2020-09-23 Thread liangjun he (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200750#comment-17200750
 ] 

liangjun he commented on HDFS-8432:
---

Hi [~fengwu99]! We maintain the DN layout version by modifying the 3.x code.

> Introduce a minimum compatible layout version to allow downgrade in more 
> rolling upgrade use cases.
> ---
>
> Key: HDFS-8432
> URL: https://issues.apache.org/jira/browse/HDFS-8432
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, rolling upgrades
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-8432-HDFS-Downgrade-Extended-Support.pdf, 
> HDFS-8432-branch-2.002.patch, HDFS-8432-branch-2.003.patch, 
> HDFS-8432.001.patch, HDFS-8432.002.patch
>
>
> Maintain the prior layout version during the upgrade window and reject 
> attempts to use new features until after the upgrade has been finalized.  
> This guarantees that the prior software version can read the fsimage and edit 
> logs if the administrator decides to downgrade.  This will make downgrade 
> usable for the majority of NameNode layout version changes, which just 
> involve introduction of new edit log operations.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-23 Thread wangzhaohui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-15591 started by wangzhaohui.
--
> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, after-1.jpg, after-2.jpg, 
> before-1.jpg, before-2.jpg
>
>
> The path mounted by the router does not exist on the NN; the router will 
> create a virtual folder with the mount name, but the "browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-23 Thread wangzhaohui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-15591:
---
Status: Open  (was: Patch Available)

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, after-1.jpg, after-2.jpg, 
> before-1.jpg, before-2.jpg
>
>
> The path mounted by the router does not exist on the NN; the router will 
> create a virtual folder with the mount name, but the "browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-23 Thread wangzhaohui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-15591:
---
Status: Patch Available  (was: In Progress)

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, after-1.jpg, after-2.jpg, 
> before-1.jpg, before-2.jpg
>
>
> The path mounted by the router does not exist on the NN; the router will 
> create a virtual folder with the mount name, but the "browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15592) DistCP fails with ViewHDFS and preserveEC options if the actual target path is non HDFS

2020-09-23 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-15592:
---
Summary: DistCP fails with ViewHDFS and preserveEC options if the actual 
target path is non HDFS  (was: DistCP fails with ViewHDFS if the actual target 
path is non HDFS)

> DistCP fails with ViewHDFS and preserveEC options if the actual target path 
> is non HDFS
> ---
>
> Key: HDFS-15592
> URL: https://issues.apache.org/jira/browse/HDFS-15592
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, ViewHDFS
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>
> When we configure target path mount point with Ozone (or any other fs), 
> distcp will fail.
> The reason is that if the src path has an EC policy enabled, distcp will try 
> to retain those properties. So, in this case it is using the DFS-specific 
> createFile API.
> But here we have to ensure the target path can be non-HDFS in the ViewHDFS 
> case. In RetriableFileCopyCommand#copyToFile, we should fix the following 
> piece of code.
>  
> {code:java}
> if (preserveEC && sourceStatus.isErasureCoded()
>  && sourceStatus instanceof HdfsFileStatus
>  && targetFS instanceof DistributedFileSystem) {
>  ecPolicy = ((HdfsFileStatus) sourceStatus).getErasureCodingPolicy();
> }{code}
>  
> Here it's just checking targetFS instanceof DistributedFileSystem, but in the 
> ViewHDFS case, fs will be DFS only while the actual target can point to a 
> mounted fs. So, to handle this case, we should use the resolvePath API and 
> check whether the resolved target path scheme is dfs or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-15592) DistCP fails with ViewHDFS if the actual target path is non HDFS

2020-09-23 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-15592:
---
Comment: was deleted

(was: 
{code:java}
 Error: java.io.IOException: File copy failed: hdfs://ns1/test/test.txt --> 
hdfs://ns1/OzoneTest/test/test.txt at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
 at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:219) at 
org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168) Caused by: 
java.io.IOException: Couldn't run retriable-command: Copying 
hdfs://ns1/test/test.txt to hdfs://ns1/OzoneTest/test/test.txt at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
 at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
 ... 10 more Caused by: java.lang.UnsupportedOperationException: This 
API:create is specific to DFS. Can't run on other fs:o3fs://bucket.vol.ozone1 
at 
org.apache.hadoop.hdfs.ViewDistributedFileSystem.checkDFS(ViewDistributedFileSystem.java:402)
 at 
org.apache.hadoop.hdfs.ViewDistributedFileSystem.create(ViewDistributedFileSystem.java:391)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToFile(RetriableFileCopyCommand.java:201)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:143)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
 at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87) 
... 11 more

 
{code}
)

> DistCP fails with ViewHDFS if the actual target path is non HDFS
> 
>
> Key: HDFS-15592
> URL: https://issues.apache.org/jira/browse/HDFS-15592
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, ViewHDFS
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>
> When we configure target path mount point with Ozone (or any other fs), 
> distcp will fail.
> The reason is that if the src path has an EC policy enabled, distcp will try 
> to retain those properties. So, in this case it is using the DFS-specific 
> createFile API.
> But here we have to ensure the target path can be non-HDFS in the ViewHDFS 
> case. In RetriableFileCopyCommand#copyToFile, we should fix the following 
> piece of code.
>  
> {code:java}
> if (preserveEC && sourceStatus.isErasureCoded()
>  && sourceStatus instanceof HdfsFileStatus
>  && targetFS instanceof DistributedFileSystem) {
>  ecPolicy = ((HdfsFileStatus) sourceStatus).getErasureCodingPolicy();
> }{code}
>  
> Here it's just checking targetFS instanceof DistributedFileSystem, but in the 
> ViewHDFS case, fs will be DFS only while the actual target can point to a 
> mounted fs. So, to handle this case, we should use the resolvePath API and 
> check whether the resolved target path scheme is dfs or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-23 Thread wangzhaohui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-15591:
---
Status: Patch Available  (was: In Progress)

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, after-1.jpg, after-2.jpg, 
> before-1.jpg, before-2.jpg
>
>
> The path mounted by the router does not exist on the NN; the router will 
> create a virtual folder with the mount name, but the "browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15592) DistCP fails with ViewHDFS if the actual target path is non HDFS

2020-09-23 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200597#comment-17200597
 ] 

Uma Maheswara Rao G edited comment on HDFS-15592 at 9/23/20, 6:38 AM:
--


{code:java}
 Error: java.io.IOException: File copy failed: hdfs://ns1/test/test.txt --> 
hdfs://ns1/OzoneTest/test/test.txt at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
 at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:219) at 
org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168) Caused by: 
java.io.IOException: Couldn't run retriable-command: Copying 
hdfs://ns1/test/test.txt to hdfs://ns1/OzoneTest/test/test.txt at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
 at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
 ... 10 more Caused by: java.lang.UnsupportedOperationException: This 
API:create is specific to DFS. Can't run on other fs:o3fs://bucket.vol.ozone1 
at 
org.apache.hadoop.hdfs.ViewDistributedFileSystem.checkDFS(ViewDistributedFileSystem.java:402)
 at 
org.apache.hadoop.hdfs.ViewDistributedFileSystem.create(ViewDistributedFileSystem.java:391)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToFile(RetriableFileCopyCommand.java:201)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:143)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
 at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87) 
... 11 more

 
{code}



was (Author: umamaheswararao):
 
{noformat}
 Error: java.io.IOException: File copy failed: hdfs://ns1/test/test.txt --> 
hdfs://ns1/OzoneTest/test/test.txt at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
 at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:219) at 
org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168) Caused by: 
java.io.IOException: Couldn't run retriable-command: Copying 
hdfs://ns1/test/test.txt to hdfs://ns1/OzoneTest/test/test.txt at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
 at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
 ... 10 more Caused by: java.lang.UnsupportedOperationException: This 
API:create is specific to DFS. Can't run on other fs:o3fs://bucket.vol.ozone1 
at 
org.apache.hadoop.hdfs.ViewDistributedFileSystem.checkDFS(ViewDistributedFileSystem.java:402)
 at 
org.apache.hadoop.hdfs.ViewDistributedFileSystem.create(ViewDistributedFileSystem.java:391)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToFile(RetriableFileCopyCommand.java:201)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:143)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
 at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87) 
... 11 more
{noformat}
 

> DistCP fails with ViewHDFS if the actual target path is non HDFS
> 
>
> Key: HDFS-15592
> URL: https://issues.apache.org/jira/browse/HDFS-15592
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, ViewHDFS
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>
> When we configure target path mount point with Ozone (or any other fs), 
> distcp will fail.
> The reason is that if the src path has an EC policy enabled, distcp will try 
> to retain those properties. So, in this case it is using the DFS-specific 
> createFile API.
> But 

[jira] [Commented] (HDFS-15592) DistCP fails with ViewHDFS if the actual target path is non HDFS

2020-09-23 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200597#comment-17200597
 ] 

Uma Maheswara Rao G commented on HDFS-15592:


 
{noformat}
 Error: java.io.IOException: File copy failed: hdfs://ns1/test/test.txt --> 
hdfs://ns1/OzoneTest/test/test.txt at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
 at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:219) at 
org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168) Caused by: 
java.io.IOException: Couldn't run retriable-command: Copying 
hdfs://ns1/test/test.txt to hdfs://ns1/OzoneTest/test/test.txt at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
 at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
 ... 10 more Caused by: java.lang.UnsupportedOperationException: This 
API:create is specific to DFS. Can't run on other fs:o3fs://bucket.vol.ozone1 
at 
org.apache.hadoop.hdfs.ViewDistributedFileSystem.checkDFS(ViewDistributedFileSystem.java:402)
 at 
org.apache.hadoop.hdfs.ViewDistributedFileSystem.create(ViewDistributedFileSystem.java:391)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToFile(RetriableFileCopyCommand.java:201)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:143)
 at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
 at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87) 
... 11 more
{noformat}
 

> DistCP fails with ViewHDFS if the actual target path is non HDFS
> 
>
> Key: HDFS-15592
> URL: https://issues.apache.org/jira/browse/HDFS-15592
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, ViewHDFS
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>
> When we configure target path mount point with Ozone (or any other fs), 
> distcp will fail.
> The reason is that if the src path has an EC policy enabled, distcp will try 
> to retain those properties. So, in this case it is using the DFS-specific 
> createFile API.
> But here we have to ensure the target path can be non-HDFS in the ViewHDFS 
> case. In RetriableFileCopyCommand#copyToFile, we should fix the following 
> piece of code.
>  
> {code:java}
> if (preserveEC && sourceStatus.isErasureCoded()
>  && sourceStatus instanceof HdfsFileStatus
>  && targetFS instanceof DistributedFileSystem) {
>  ecPolicy = ((HdfsFileStatus) sourceStatus).getErasureCodingPolicy();
> }{code}
>  
> Here it's just checking targetFS instanceof DistributedFileSystem, but in the 
> ViewHDFS case, fs will be DFS only while the actual target can point to a 
> mounted fs. So, to handle this case, we should use the resolvePath API and 
> check whether the resolved target path scheme is dfs or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-23 Thread wangzhaohui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-15591 started by wangzhaohui.
--
> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, after-1.jpg, after-2.jpg, 
> before-1.jpg, before-2.jpg
>
>
> The path mounted by the router does not exist on the NN; the router will 
> create a virtual folder with the mount name, but the "browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15592) DistCP fails with ViewHDFS if the actual target path is non HDFS

2020-09-23 Thread Uma Maheswara Rao G (Jira)
Uma Maheswara Rao G created HDFS-15592:
--

 Summary: DistCP fails with ViewHDFS if the actual target path is 
non HDFS
 Key: HDFS-15592
 URL: https://issues.apache.org/jira/browse/HDFS-15592
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ViewHDFS, viewfs
Affects Versions: 3.4.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


When we configure target path mount point with Ozone (or any other fs), distcp 
will fail.

The reason is that if the src path has an EC policy enabled, distcp will try to 
retain those properties. So, in this case it is using the DFS-specific 
createFile API.
But here we have to ensure the target path can be non-HDFS in the ViewHDFS case.

In RetriableFileCopyCommand#copyToFile, we should fix the following piece of 
code.

 
{code:java}
if (preserveEC && sourceStatus.isErasureCoded()
 && sourceStatus instanceof HdfsFileStatus
 && targetFS instanceof DistributedFileSystem) {
 ecPolicy = ((HdfsFileStatus) sourceStatus).getErasureCodingPolicy();
}{code}
 

Here it's just checking targetFS instanceof DistributedFileSystem, but in the 
ViewHDFS case, fs will be DFS only while the actual target can point to a 
mounted fs. So, to handle this case, we should use the resolvePath API and 
check whether the resolved target path scheme is dfs or not.
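
A minimal sketch of the resolvePath-based check suggested above (illustrative 
only; the helper name is hypothetical and this is not the committed fix):

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class TargetSchemeCheck {
  private TargetSchemeCheck() {
  }

  // resolvePath() follows viewfs/ViewHDFS mount points, so for a DFS facade
  // it returns the fully qualified path on the mounted filesystem
  // (e.g. o3fs://bucket.vol.ozone1/...).
  static boolean isTargetReallyHdfs(FileSystem targetFS, Path target)
      throws IOException {
    Path resolved = targetFS.resolvePath(target);
    return "hdfs".equals(resolved.toUri().getScheme());
  }
}
{code}

With such a helper, the EC policy would be retained only when the resolved 
target scheme really is hdfs, instead of relying on targetFS instanceof 
DistributedFileSystem alone.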



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-23 Thread wangzhaohui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-15591:
---
Attachment: before-2.jpg

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, after-1.jpg, after-2.jpg, 
> before-1.jpg, before-2.jpg
>
>
> The path mounted by the router does not exist on the NN; the router will 
> create a virtual folder with the mount name, but the "browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-23 Thread wangzhaohui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-15591:
---
Attachment: after-2.jpg

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, after-1.jpg, after-2.jpg, 
> before-1.jpg, before-2.jpg
>
>
> The path mounted by the router does not exist on the NN; the router will 
> create a virtual folder with the mount name, but the "browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-23 Thread wangzhaohui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-15591:
---
Attachment: HDFS-15591-001.patch

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, after-1.jpg, after-2.jpg, 
> before-1.jpg, before-2.jpg
>
>
> The path mounted by the router does not exist on the NN; the router will 
> create a virtual folder with the mount name, but the "browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-23 Thread wangzhaohui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-15591:
---
Attachment: before-1.jpg

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, after-1.jpg, after-2.jpg, 
> before-1.jpg, before-2.jpg
>
>
> The path mounted by the router does not exist on the NN; the router will 
> create a virtual folder with the mount name, but the "browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-23 Thread wangzhaohui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-15591:
---
Attachment: after-1.jpg

> RBF: Fix webHdfs file display error
> ---
>
> Key: HDFS-15591
> URL: https://issues.apache.org/jira/browse/HDFS-15591
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-15591-001.patch, after-1.jpg, after-2.jpg, 
> before-1.jpg, before-2.jpg
>
>
> The path mounted by the router does not exist on the NN; the router will 
> create a virtual folder with the mount name, but the "browse the file 
> system" display over HTTP is wrong.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=489417=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489417
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 06:32
Start Date: 23/Sep/20 06:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-697162885


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 16 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 24s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  22m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 16s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |  10m  3s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  20m 54s | 
[/diff-compile-cc-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2189/17/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 32 new + 131 unchanged - 
32 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  20m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  16m 48s | 
[/diff-compile-cc-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2189/17/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 37 new + 126 
unchanged - 37 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  16m 48s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 57s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2189/17/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 16 new + 723 unchanged - 6 fixed = 739 total (was 
729)  |
   | +1 :green_heart: |  mvnsite  |   4m  3s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   8m 21s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 22s |  |  hadoop-common in the patch 

[jira] [Created] (HDFS-15591) RBF: Fix webHdfs file display error

2020-09-23 Thread wangzhaohui (Jira)
wangzhaohui created HDFS-15591:
--

 Summary: RBF: Fix webHdfs file display error
 Key: HDFS-15591
 URL: https://issues.apache.org/jira/browse/HDFS-15591
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: wangzhaohui
Assignee: wangzhaohui


The path mounted by the Router does not exist on the NN; the Router will create a 
virtual folder with the mount name, but the "browse the file system" display in 
the HTTP UI is wrong. 
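For illustration only, a minimal sketch of the mechanism described here, under 
the assumption that the Router fabricates a directory-like entry for mount 
points that have no backing path on the NameNode; the class and method names 
below are hypothetical, not the actual RBF code:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/** Hypothetical model of virtual mount-point folders; not real RBF code. */
public class MountPointStatusSketch {
  // mount path -> target path on a downstream NameNode
  private final Map<String, String> mountTable = new HashMap<>();

  void addMount(String mountPath, String target) {
    mountTable.put(mountPath, target);
  }

  /**
   * Resolve a path for display. If the NameNode knows the path, use its
   * answer; otherwise, if the path is a mount entry, fabricate a virtual
   * directory named after the mount point. The web UI has to render these
   * fabricated entries consistently, or the file browser looks wrong.
   */
  String getStatusForDisplay(String path, Set<String> pathsOnNameNode) {
    if (pathsOnNameNode.contains(path)) {
      return "dir (from NameNode): " + path;
    }
    if (mountTable.containsKey(path)) {
      return "dir (virtual mount folder): " + path;
    }
    return "FileNotFound: " + path;
  }
}
{code}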



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15590?focusedWorklogId=489404=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489404
 ]

ASF GitHub Bot logged work on HDFS-15590:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 06:01
Start Date: 23/Sep/20 06:01
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #2326:
URL: https://github.com/apache/hadoop/pull/2326#issuecomment-697152099


   > How about failing the second user delete request with an "Already marked 
as deleted" exception instead of changing the edit log loading? It is hard for 
edit log loading to guess whether the command is valid.
   
   Thanks @szetszwo. The problem is not just with user deletes. Consider a 
sequence like this:
   1) Say we have two snapshots s1 and s2 after enabling ordered snapshot 
deletion
   2) User deletes s2 ---> creates an edit log entry
   3) User deletes s1
   4) The snapshot deletion GC thread now actually deletes s2 ---> creates an 
edit log entry again
   5) Turn off ordered snapshot deletion and restart
   
   Replay then hits the same problem again. We cannot avoid two edit log 
entries for the same snapshot delete while the snapshot GC thread is running 
and deleting snapshots.
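
For illustration, a minimal, self-contained sketch of the replay behavior 
described above. This is not the actual FSEditLogLoader code; all names here 
are hypothetical, and the point is only that a strict replay throws on the 
second delete entry for the same snapshot, while an idempotent replay would 
treat it as a no-op:

```java
import java.util.HashSet;
import java.util.Set;

/** Toy model of snapshot delete replay; not real NameNode code. */
public class SnapshotReplaySketch {
  private final Set<String> liveSnapshots = new HashSet<>();

  void createSnapshot(String name) {
    liveSnapshots.add(name);
  }

  /** Strict replay: fails on a duplicate delete entry. */
  void replayDeleteStrict(String name) {
    if (!liveSnapshots.remove(name)) {
      throw new IllegalStateException(
          "Cannot delete snapshot " + name + ": the snapshot does not exist.");
    }
  }

  /** Idempotent replay: a duplicate delete entry is a no-op. */
  void replayDeleteIdempotent(String name) {
    liveSnapshots.remove(name);
  }

  public static void main(String[] args) {
    SnapshotReplaySketch nn = new SnapshotReplaySketch();
    nn.createSnapshot("s1");
    nn.createSnapshot("s2");
    nn.replayDeleteStrict("s2");     // user's delete of s2
    nn.replayDeleteStrict("s1");     // user's delete of s1
    // GC thread's second entry for s2: strict replay throws here,
    // which is the restart failure; idempotent replay continues.
    nn.replayDeleteIdempotent("s2");
  }
}
```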



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489404)
Time Spent: 50m  (was: 40m)

> namenode fails to start when ordered snapshot deletion feature is disabled
> --
>
> Key: HDFS-15590
> URL: https://issues.apache.org/jira/browse/HDFS-15590
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> 1. Enabled the ordered snapshot deletion feature.
> 2. Created a snapshottable directory - /user/hrt_6/atrr_dir1
> 3. Created snapshots s0, s1, s2.
> 4. Deleted snapshot s2.
> 5. Deleted snapshots s0, s1, and s2 again.
> 6. Disabled the ordered snapshot deletion feature.
> 7. Restarted the NameNode.
> The NameNode failed to start:
> org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot delete snapshot s2 
> from path /user/hrt_6/atrr_dir2: the snapshot does not exist.
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:237)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:293)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:510)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:819)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:960)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:933)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737)
> {code}
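
For reference, a sketch of how the numbered repro steps above map onto 
client-side calls (assuming a running cluster reachable via fs.defaultFS and 
superuser privileges for allowSnapshot; steps 6 and 7 are server-side 
configuration and restart, not shown here):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class OrderedSnapshotDeletionRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    Path dir = new Path("/user/hrt_6/atrr_dir1");

    dfs.mkdirs(dir);
    dfs.allowSnapshot(dir);        // step 2: make the directory snapshottable

    dfs.createSnapshot(dir, "s0"); // step 3
    dfs.createSnapshot(dir, "s1");
    dfs.createSnapshot(dir, "s2");

    dfs.deleteSnapshot(dir, "s2"); // step 4: only *marks* s2 deleted under
                                   // ordered deletion, but still logs an edit
    dfs.deleteSnapshot(dir, "s0"); // step 5: after s0 and s1 are removed, the
    dfs.deleteSnapshot(dir, "s1"); // GC thread actually deletes s2 and logs a
                                   // second edit entry for the same snapshot
  }
}
{code}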



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org