[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=489354&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489354
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:56
Start Date: 23/Sep/20 04:56
Worklog Time Spent: 10m 
  Work Description: liuml07 commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-697078022


   No more comments on the latest commit.
   
   Let's wait for the QA and run failing tests locally. I'll commit shortly.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489354)
Time Spent: 10h  (was: 9h 50m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 10h
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response rate of HDFS but also ensures the reliability of 
> the data.
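
For context, [NVDIMM] becomes a new storage-type tag alongside the existing 
[RAM_DISK]/[SSD]/[DISK]/[ARCHIVE] prefixes in dfs.datanode.data.dir. A minimal 
configuration sketch, assuming the usual prefix syntax; the /mnt/pmem0 mount 
point is hypothetical:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class NvdimmVolumeConfig {
  public static void main(String[] args) {
    // Tag each DataNode directory with its storage type; [NVDIMM] is the
    // new type this patch adds. /mnt/pmem0 is a hypothetical NVDIMM mount.
    Configuration conf = new HdfsConfiguration();
    conf.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY,
        "[NVDIMM]/mnt/pmem0,[SSD]/mnt/ssd1,[DISK]/data/1");
    System.out.println(conf.get(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY));
  }
}
{code}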



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15548?focusedWorklogId=489357&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489357
 ]

ASF GitHub Bot logged work on HDFS-15548:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:56
Start Date: 23/Sep/20 04:56
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-697130352


   Sorry, I missed this message before. Will have another review later today. 
Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489357)
Time Spent: 4h  (was: 3h 50m)

> Allow configuring DISK/ARCHIVE storage types on same device mount
> -
>
> Key: HDFS-15548
> URL: https://issues.apache.org/jira/browse/HDFS-15548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> We can allow configuring DISK and ARCHIVE storage types on the same device 
> mount by using two separate directories.
> Users should be able to configure the capacity for each, and the datanode 
> usage report should report stats correctly.
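
For illustration, a minimal sketch of such a layout, assuming the long-standing 
storage-type prefix syntax of dfs.datanode.data.dir; the tiering and 
capacity-split key names follow this PR's proposal and should be treated as 
assumptions, and /mnt/disk1 is a hypothetical mount:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class SameMountTieringSketch {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();
    // Two directories on the same device mount, one per storage type.
    conf.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY,
        "[DISK]/mnt/disk1/hdfs-disk,[ARCHIVE]/mnt/disk1/hdfs-archive");
    // Proposed keys (assumptions, not settled API): enable the feature and
    // reserve a fraction of the mount's capacity for the ARCHIVE directory.
    conf.setBoolean("dfs.datanode.same-disk-tiering.enabled", true);
    conf.setDouble("dfs.datanode.reserve-for-archive.default.percentage", 0.5);
  }
}
{code}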



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=489320&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489320
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:53
Start Date: 23/Sep/20 04:53
Worklog Time Spent: 10m 
  Work Description: huangtianhua commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-697068485


   @liuml07 Sorry, we didn't quite catch that. What other changes do we need to make? 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489320)
Time Spent: 9h 50m  (was: 9h 40m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response rate of HDFS but also ensures the reliability of 
> the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15590?focusedWorklogId=489311&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489311
 ]

ASF GitHub Bot logged work on HDFS-15590:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:52
Start Date: 23/Sep/20 04:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2326:
URL: https://github.com/apache/hadoop/pull/2326#issuecomment-696610845


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 39s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 34s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 40s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 16s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 19s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 17s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 15s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 43s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 1 new + 20 unchanged - 0 fixed = 21 total (was 20)  |
   | +1 :green_heart: |  mvnsite  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  2s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  6s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  97m 15s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 190m 42s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2326 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1286642aaefc 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6b5d9e2334b |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 

[jira] [Work logged] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15590?focusedWorklogId=489269&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489269
 ]

ASF GitHub Bot logged work on HDFS-15590:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:48
Start Date: 23/Sep/20 04:48
Worklog Time Spent: 10m 
  Work Description: bshashikant opened a new pull request #2326:
URL: https://github.com/apache/hadoop/pull/2326


   please see https://issues.apache.org/jira/browse/HDFS-15590
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489269)
Time Spent: 0.5h  (was: 20m)

> namenode fails to start when ordered snapshot deletion feature is disabled
> --
>
> Key: HDFS-15590
> URL: https://issues.apache.org/jira/browse/HDFS-15590
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:java}
> 1. Enabled ordered deletion snapshot feature.
> 2. Created snapshottable directory - /user/hrt_6/atrr_dir1
> 3. Created snapshots s0, s1, s2.
> 4. Deleted snapshot s2
> 5. Delete snapshot s0, s1, s2 again
> 6. Disable ordered deletion snapshot feature
> 7. Restart Namenode
> Failed to start namenode.
> org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot delete snapshot s2 
> from path /user/hrt_6/atrr_dir2: the snapshot does not exist.
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:237)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:293)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:510)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:819)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737)
> {code}
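
Not part of the patch: a minimal MiniDFSCluster sketch of the steps above, 
assuming dfs.namenode.snapshot.deletion.ordered is the feature toggle and with 
the restart mechanics simplified:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class OrderedSnapshotDeletionRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setBoolean("dfs.namenode.snapshot.deletion.ordered", true);
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
      DistributedFileSystem dfs = cluster.getFileSystem();
      Path dir = new Path("/user/hrt_6/atrr_dir1");
      dfs.mkdirs(dir);
      dfs.allowSnapshot(dir);
      dfs.createSnapshot(dir, "s0");
      dfs.createSnapshot(dir, "s1");
      dfs.createSnapshot(dir, "s2");
      // Out-of-order delete: with ordered deletion on, s2 is only marked
      // deleted, but a DeleteSnapshotOp still lands in the edit log.
      dfs.deleteSnapshot(dir, "s2");
      // Disabling the feature and restarting replays that op against a
      // snapshot that no longer "exists", so the NameNode fails to start.
      conf.setBoolean("dfs.namenode.snapshot.deletion.ordered", false);
      cluster.restartNameNode();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}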



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=489210&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489210
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:44
Start Date: 23/Sep/20 04:44
Worklog Time Spent: 10m 
  Work Description: liuml07 commented on a change in pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#discussion_r492866266



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
##
@@ -202,6 +205,14 @@ public void testDfsReservedForDifferentStorageTypes() 
throws IOException {
 .setConf(conf)
 .build();
 assertEquals("", 100L, volume4.getReserved());
+FsVolumeImpl volume5 = new FsVolumeImplBuilder().setDataset(dataset)
+.setStorageDirectory(
+new StorageDirectory(
+StorageLocation.parse("[NVDIMM]"+volDir.getPath())))
+.setStorageID("storage-id")
+.setConf(conf)
+.build();
+assertEquals("", 3L, volume5.getReserved());

Review comment:
   Usually we can, but following the original code style here is a bad idea. When 
the assertion fails, the original code gives an empty message; my suggestion shows 
the expected and actual values so you can debug. Please change it.
   
   We can also file a JIRA to update all such cases where `assertEquals` can be 
improved.
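
For illustration, a minimal sketch of the suggested fix, assuming JUnit 4's 
assertEquals(String message, long expected, long actual) overload; the message 
text is hypothetical:

{code:java}
import static org.junit.Assert.assertEquals;

public class DescriptiveAssertSketch {
  public static void main(String[] args) {
    long reserved = 3L; // stands in for volume5.getReserved()
    // With an empty message a failure prints only "expected:<3> but was:<...>";
    // a descriptive message also says what was being verified.
    assertEquals("reserved space for the [NVDIMM] volume", 3L, reserved);
  }
}
{code}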





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489210)
Time Spent: 9h 40m  (was: 9.5h)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response rate of HDFS but also ensures the reliability of 
> the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15548) Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15548?focusedWorklogId=489085&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489085
 ]

ASF GitHub Bot logged work on HDFS-15548:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:33
Start Date: 23/Sep/20 04:33
Worklog Time Spent: 10m 
  Work Description: LeonGao91 commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-696393867


   @Hexiaoqiao Would you please take a second look? I have added the check we 
discussed, along with a UT.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489085)
Time Spent: 3h 50m  (was: 3h 40m)

> Allow configuring DISK/ARCHIVE storage types on same device mount
> -
>
> Key: HDFS-15548
> URL: https://issues.apache.org/jira/browse/HDFS-15548
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> We can allow configuring DISK and ARCHIVE storage types on the same device 
> mount by using two separate directories.
> Users should be able to configure the capacity for each, and the datanode 
> usage report should report stats correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=489075&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-489075
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:32
Start Date: 23/Sep/20 04:32
Worklog Time Spent: 10m 
  Work Description: YaYun-Wang commented on a change in pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#discussion_r492598805



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
##
@@ -202,6 +205,14 @@ public void testDfsReservedForDifferentStorageTypes() 
throws IOException {
 .setConf(conf)
 .build();
 assertEquals("", 100L, volume4.getReserved());
+FsVolumeImpl volume5 = new FsVolumeImplBuilder().setDataset(dataset)
+.setStorageDirectory(
+new StorageDirectory(
+StorageLocation.parse("[NVDIMM]"+volDir.getPath())))
+.setStorageID("storage-id")
+.setConf(conf)
+.build();
+assertEquals("", 3L, volume5.getReserved());

Review comment:
   To stay consistent with the original code, the `assertEquals()` here takes 
three parameters, as on lines 196 and 204 of the original code.
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 489075)
Time Spent: 9.5h  (was: 9h 20m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response rate of HDFS but also ensures the reliability of 
> the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-13009) Creation of Encryption zone should succeed even if directory is not empty.

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13009?focusedWorklogId=488988&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488988
 ]

ASF GitHub Bot logged work on HDFS-13009:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:26
Start Date: 23/Sep/20 04:26
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2328:
URL: https://github.com/apache/hadoop/pull/2328#issuecomment-696975349


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 47s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m  8s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   2m 11s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   5m 16s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 12s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  2s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   2m  2s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 38s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 38s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 57s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  21m 57s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   4m 59s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 149m 27s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 277m  7s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
   |   | hadoop.cli.TestCryptoAdminCLI |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
   |   | hadoop.hdfs.TestGetFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2328/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2328 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 14452f5a5b90 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 474fa80bfb1 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 

[jira] [Work logged] (HDFS-15557) Log the reason why a storage log file can't be deleted

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15557?focusedWorklogId=488996&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488996
 ]

ASF GitHub Bot logged work on HDFS-15557:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:26
Start Date: 23/Sep/20 04:26
Worklog Time Spent: 10m 
  Work Description: goiri commented on pull request #2274:
URL: https://github.com/apache/hadoop/pull/2274#issuecomment-696913823


   @liuml07 any further comments?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 488996)
Time Spent: 2h 50m  (was: 2h 40m)

> Log the reason why a storage log file can't be deleted
> --
>
> Key: HDFS-15557
> URL: https://issues.apache.org/jira/browse/HDFS-15557
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Before
>  
> {code:java}
> 2020-09-02 06:48:31,983 WARN [IPC Server handler 206 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid{code}
>  
> After
>  
> {code:java}
> 2020-09-02 17:43:29,421 WARN [IPC Server handler 111 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid due to failure: 
> java.nio.file.FileSystemException: K:\data\hdfs\namenode\current\seen_txid: 
> The process cannot access the file because it is being used by another 
> process.{code}
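
For illustration, a minimal sketch of the idea behind the change, assuming the 
fix replaces java.io.File#delete() (which only reports false) with 
java.nio.file.Files#delete() (which throws an exception carrying the OS-level 
reason); the helper name is hypothetical:

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class DeleteWithReason {
  // Deletes f, or rethrows with the underlying cause appended, matching the
  // "due to failure: ..." suffix shown in the log above.
  static void deleteOrExplain(File f) throws IOException {
    try {
      Files.delete(f.toPath());
    } catch (IOException e) {
      throw new IOException(
          "Could not delete original file " + f + " due to failure: " + e, e);
    }
  }
}
{code}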



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-13009) Creation of Encryption zone should succeed even if directory is not empty.

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13009?focusedWorklogId=488973&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488973
 ]

ASF GitHub Bot logged work on HDFS-13009:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:24
Start Date: 23/Sep/20 04:24
Worklog Time Spent: 10m 
  Work Description: zehaoc2 opened a new pull request #2328:
URL: https://github.com/apache/hadoop/pull/2328


   This is a change we (Verizon Media) have been running in production for 
2 years.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 488973)
Time Spent: 0.5h  (was: 20m)

> Creation of Encryption zone should succeed even if directory is not empty.
> --
>
> Key: HDFS-13009
> URL: https://issues.apache.org/jira/browse/HDFS-13009
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Rushabh Shah
>Assignee: Rushabh Shah
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently we have a restriction that creation of encryption zone can be done 
> only on an empty directory.
> This jira is to remove that restriction.
> Motivation:
> New customers who want to start using encryption zones can make an existing 
> directory encrypted.
> They will be able to read the old data as-is, while newly written data will 
> be encrypted on write and decrypted on read.
> Internally we have many customers asking for this feature.
> Currently they have to ask for more space quota and encrypt the old data.
> This will make their lives much easier.
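
For illustration, a minimal sketch of creating an encryption zone through the 
public HdfsAdmin API; the path and key name are hypothetical, and the key must 
already exist in the KMS:

{code:java}
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.CreateEncryptionZoneFlag;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class CreateZoneOnExistingDir {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    HdfsAdmin admin = new HdfsAdmin(FileSystem.getDefaultUri(conf), conf);
    // Today this fails if the directory is non-empty; with this change it
    // would succeed and leave the existing data readable as-is.
    admin.createEncryptionZone(new Path("/user/existing-data"), "key1",
        EnumSet.of(CreateEncryptionZoneFlag.NO_TRASH));
  }
}
{code}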



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=488934&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488934
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:21
Start Date: 23/Sep/20 04:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-696742400







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 488934)
Time Spent: 9h 20m  (was: 9h 10m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response rate of HDFS but also ensures the reliability of 
> the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15557) Log the reason why a storage log file can't be deleted

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15557?focusedWorklogId=488869&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488869
 ]

ASF GitHub Bot logged work on HDFS-15557:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:15
Start Date: 23/Sep/20 04:15
Worklog Time Spent: 10m 
  Work Description: goiri merged pull request #2274:
URL: https://github.com/apache/hadoop/pull/2274


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 488869)
Time Spent: 2h 40m  (was: 2.5h)

> Log the reason why a storage log file can't be deleted
> --
>
> Key: HDFS-15557
> URL: https://issues.apache.org/jira/browse/HDFS-15557
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Before
>  
> {code:java}
> 2020-09-02 06:48:31,983 WARN [IPC Server handler 206 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid{code}
>  
> After
>  
> {code:java}
> 2020-09-02 17:43:29,421 WARN [IPC Server handler 111 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid due to failure: 
> java.nio.file.FileSystemException: K:\data\hdfs\namenode\current\seen_txid: 
> The process cannot access the file because it is being used by another 
> process.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15557) Log the reason why a storage log file can't be deleted

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15557?focusedWorklogId=488786&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488786
 ]

ASF GitHub Bot logged work on HDFS-15557:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 04:09
Start Date: 23/Sep/20 04:09
Worklog Time Spent: 10m 
  Work Description: liuml07 commented on pull request #2274:
URL: https://github.com/apache/hadoop/pull/2274#issuecomment-696915170


   +1
   
   Not sure why the QA is not coming back cleanly here...not related to the 
change.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 488786)
Time Spent: 2.5h  (was: 2h 20m)

> Log the reason why a storage log file can't be deleted
> --
>
> Key: HDFS-15557
> URL: https://issues.apache.org/jira/browse/HDFS-15557
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Before
>  
> {code:java}
> 2020-09-02 06:48:31,983 WARN [IPC Server handler 206 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid{code}
>  
> After
>  
> {code:java}
> 2020-09-02 17:43:29,421 WARN [IPC Server handler 111 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid due to failure: 
> java.nio.file.FileSystemException: K:\data\hdfs\namenode\current\seen_txid: 
> The process cannot access the file because it is being used by another 
> process.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=488728&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488728
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 02:06
Start Date: 23/Sep/20 02:06
Worklog Time Spent: 10m 
  Work Description: liuml07 commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-697078022


   No more comments on the latest commit.
   
   Let's wait for the QA and run failing tests locally. I'll commit shortly.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 488728)
Time Spent: 9h 10m  (was: 9h)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response rate of HDFS but also ensures the reliability of 
> the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=488704&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488704
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 23/Sep/20 01:29
Start Date: 23/Sep/20 01:29
Worklog Time Spent: 10m 
  Work Description: huangtianhua commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-697068485


   @liuml07 Sorry, we didn't quite catch that. What other changes do we need to make? 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 488704)
Time Spent: 9h  (was: 8h 50m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 9h
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only 
> improves the response rate of HDFS but also ensures the reliability of 
> the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-13009) Creation of Encryption zone should succeed even if directory is not empty.

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13009?focusedWorklogId=488593&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488593
 ]

ASF GitHub Bot logged work on HDFS-13009:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 20:55
Start Date: 22/Sep/20 20:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2328:
URL: https://github.com/apache/hadoop/pull/2328#issuecomment-696975349


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 47s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m  8s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   2m 11s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   5m 16s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 12s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  2s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   2m  2s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 38s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 38s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 57s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  21m 57s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   4m 59s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 149m 27s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 277m  7s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
   |   | hadoop.cli.TestCryptoAdminCLI |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
   |   | hadoop.hdfs.TestGetFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2328/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2328 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 14452f5a5b90 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 474fa80bfb1 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 

[jira] [Updated] (HDFS-15557) Log the reason why a storage log file can't be deleted

2020-09-22 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-15557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-15557:
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Log the reason why a storage log file can't be deleted
> --
>
> Key: HDFS-15557
> URL: https://issues.apache.org/jira/browse/HDFS-15557
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Before
>  
> {code:java}
> 2020-09-02 06:48:31,983 WARN [IPC Server handler 206 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid{code}
>  
> After
>  
> {code:java}
> 2020-09-02 17:43:29,421 WARN [IPC Server handler 111 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid due to failure: 
> java.nio.file.FileSystemException: K:\data\hdfs\namenode\current\seen_txid: 
> The process cannot access the file because it is being used by another 
> process.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15557) Log the reason why a storage log file can't be deleted

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15557?focusedWorklogId=488576&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488576
 ]

ASF GitHub Bot logged work on HDFS-15557:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 20:23
Start Date: 22/Sep/20 20:23
Worklog Time Spent: 10m 
  Work Description: goiri merged pull request #2274:
URL: https://github.com/apache/hadoop/pull/2274


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 488576)
Time Spent: 2h 20m  (was: 2h 10m)

> Log the reason why a storage log file can't be deleted
> --
>
> Key: HDFS-15557
> URL: https://issues.apache.org/jira/browse/HDFS-15557
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Before
>  
> {code:java}
> 2020-09-02 06:48:31,983 WARN [IPC Server handler 206 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid{code}
>  
> After
>  
> {code:java}
> 2020-09-02 17:43:29,421 WARN [IPC Server handler 111 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid due to failure: 
> java.nio.file.FileSystemException: K:\data\hdfs\namenode\current\seen_txid: 
> The process cannot access the file because it is being used by another 
> process.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-22 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17200308#comment-17200308
 ] 

Hadoop QA commented on HDFS-15415:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
26s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} branch-3.2 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
46s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 39 unchanged - 1 fixed = 39 total (was 40) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/200/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15415 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13011966/HDFS-15415.branch-3.2.002.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux bb2eaea37c66 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Work logged] (HDFS-15557) Log the reason why a storage log file can't be deleted

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15557?focusedWorklogId=488515=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488515
 ]

ASF GitHub Bot logged work on HDFS-15557:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 18:55
Start Date: 22/Sep/20 18:55
Worklog Time Spent: 10m 
  Work Description: liuml07 commented on pull request #2274:
URL: https://github.com/apache/hadoop/pull/2274#issuecomment-696915170


   +1
   
   Not sure why the QA is not coming back cleanly here... it does not look 
related to the change.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 488515)
Time Spent: 2h 10m  (was: 2h)

> Log the reason why a storage log file can't be deleted
> --
>
> Key: HDFS-15557
> URL: https://issues.apache.org/jira/browse/HDFS-15557
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Before
>  
> {code:java}
> 2020-09-02 06:48:31,983 WARN [IPC Server handler 206 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid{code}
>  
> After
>  
> {code:java}
> 2020-09-02 17:43:29,421 WARN [IPC Server handler 111 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid due to failure: 
> java.nio.file.FileSystemException: K:\data\hdfs\namenode\current\seen_txid: 
> The process cannot access the file because it is being used by another 
> process.{code}
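
The mechanics behind the richer message are simple. A minimal sketch, assuming 
only the JDK (the class and method names are illustrative, not the patch's): 
{{java.io.File#delete}} only returns false, while {{java.nio.file.Files#delete}} 
throws an exception that names the reason, which can then be carried into the 
logged IOException exactly as in the "After" line above.

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public final class DeleteWithReason {
  // File.delete() reports only true/false; Files.delete() throws with the
  // cause, e.g. FileSystemException: "...being used by another process".
  public static void delete(File f) throws IOException {
    try {
      Files.delete(f.toPath());
    } catch (IOException e) {
      throw new IOException(
          "Could not delete original file " + f + " due to failure: " + e, e);
    }
  }
}
{code}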



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15557) Log the reason why a storage log file can't be deleted

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15557?focusedWorklogId=488509=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488509
 ]

ASF GitHub Bot logged work on HDFS-15557:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 18:53
Start Date: 22/Sep/20 18:53
Worklog Time Spent: 10m 
  Work Description: goiri commented on pull request #2274:
URL: https://github.com/apache/hadoop/pull/2274#issuecomment-696913823


   @liuml07 any further comments?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 488509)
Time Spent: 2h  (was: 1h 50m)

> Log the reason why a storage log file can't be deleted
> --
>
> Key: HDFS-15557
> URL: https://issues.apache.org/jira/browse/HDFS-15557
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Before
>  
> {code:java}
> 2020-09-02 06:48:31,983 WARN [IPC Server handler 206 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid{code}
>  
> After
>  
> {code:java}
> 2020-09-02 17:43:29,421 WARN [IPC Server handler 111 on 8020] 
> org.apache.hadoop.hdfs.server.common.Storage: writeTransactionIdToStorage 
> failed on Storage Directory root= K:\data\hdfs\namenode; location= null; 
> type= IMAGE; isShared= false; lock= 
> sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]; storageUuid= 
> null java.io.IOException: Could not delete original file 
> K:\data\hdfs\namenode\current\seen_txid due to failure: 
> java.nio.file.FileSystemException: K:\data\hdfs\namenode\current\seen_txid: 
> The process cannot access the file because it is being used by another 
> process.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS

2020-09-22 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200295#comment-17200295
 ] 

Vinayakumar B commented on HDFS-15098:
--

+1, test failures and other things are unrelated.

Will wait for one/two days before commit, if any one needs to take a look.

Thanks [~seanlau] for the update on patch.

> Add SM4 encryption method for HDFS
> --
>
> Key: HDFS-15098
> URL: https://issues.apache.org/jira/browse/HDFS-15098
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.4.0
>Reporter: liusheng
>Assignee: liusheng
>Priority: Major
>  Labels: sm4
> Attachments: HDFS-15098.001.patch, HDFS-15098.002.patch, 
> HDFS-15098.003.patch, HDFS-15098.004.patch, HDFS-15098.005.patch, 
> HDFS-15098.006.patch, HDFS-15098.007.patch, HDFS-15098.008.patch, 
> HDFS-15098.009.patch, image-2020-08-19-16-54-41-341.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> SM4 (formerly SMS4) is a block cipher used in the Chinese National Standard 
> for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure).
>  SM4 was a cipher proposed for the IEEE 802.11i standard, but has so far 
> been rejected by ISO. One of the reasons for the rejection has been 
> opposition to the WAPI fast-track proposal by the IEEE. Please see:
> [https://en.wikipedia.org/wiki/SM4_(cipher)]
>  
> *Use SM4 on HDFS as follows* (see the sketch below):
> 1. Configure Hadoop KMS.
>  2. Test HDFS SM4:
>  hadoop key create key1 -cipher 'SM4/CTR/NoPadding'
>  hdfs dfs -mkdir /benchmarks
>  hdfs crypto -createZone -keyName key1 -path /benchmarks
> *Requires:*
>  1. openssl version >= 1.1.1
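
To see the cipher suite itself in action, independent of HDFS: a minimal, 
self-contained sketch that encrypts and decrypts a buffer with 
SM4/CTR/NoPadding through the JCE, assuming the BouncyCastle provider is on 
the classpath. The all-zero key/IV and the class name are purely illustrative; 
this is not the HDFS-15098 patch.

{code:java}
import java.nio.charset.StandardCharsets;
import java.security.Security;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class Sm4CtrSketch {
  public static void main(String[] args) throws Exception {
    Security.addProvider(new BouncyCastleProvider());
    byte[] key = new byte[16]; // SM4 uses a 128-bit key (all zeros here, demo only)
    byte[] iv = new byte[16];  // CTR counter block

    Cipher enc = Cipher.getInstance("SM4/CTR/NoPadding", "BC");
    enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "SM4"),
        new IvParameterSpec(iv));
    byte[] ct = enc.doFinal("hello hdfs".getBytes(StandardCharsets.UTF_8));

    Cipher dec = Cipher.getInstance("SM4/CTR/NoPadding", "BC");
    dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "SM4"),
        new IvParameterSpec(iv));
    System.out.println(
        new String(dec.doFinal(ct), StandardCharsets.UTF_8)); // "hello hdfs"
  }
}
{code}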



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15513) Allow client to query snapshot status on one directory

2020-09-22 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200250#comment-17200250
 ] 

Siyao Meng commented on HDFS-15513:
---

Hi [~jianghuazhu], thanks for the comment.

The goal here is to reduce the overhead of getting the entire snapshot listing, 
so we can query the snapshottable status of only the list of directories the 
client is interested in.

{{DFSClient#getSnapshotListing}} eventually calls into 
{{FSNamesystem#getSnapshottableDirListing}}, the latter of which I mentioned in 
the description. It always returns the full listing. The overhead grows when 
there are more snapshottable directories.
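
To make that overhead concrete: a hedged sketch of what a client must do today 
to answer the question for a single path, using the existing 
{{DistributedFileSystem#getSnapshottableDirListing}} API. The helper method 
itself is illustrative.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;

final class SnapshottableCheck {
  // Today: fetch the ENTIRE listing, then scan it for one path.
  static boolean isSnapshottable(DistributedFileSystem dfs, Path target)
      throws IOException {
    SnapshottableDirectoryStatus[] all = dfs.getSnapshottableDirListing();
    if (all == null) {
      return false; // no snapshottable directories at all
    }
    for (SnapshottableDirectoryStatus s : all) {
      if (s.getFullPath().equals(target)) {
        return true;
      }
    }
    // Cost is O(#snapshottable dirs) per query -- the overhead at issue.
    return false;
  }
}
{code}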

> Allow client to query snapshot status on one directory
> --
>
> Key: HDFS-15513
> URL: https://issues.apache.org/jira/browse/HDFS-15513
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Affects Versions: 3.3.0
>Reporter: Siyao Meng
>Priority: Major
>
> Alternatively, we can allow the client to query snapshot status on *a list 
> of* given directories by the client. Thoughts?
> Rationale:
> At the moment, we could only retrieve the full list of snapshottable 
> directories with 
> [{{getSnapshottableDirListing()}}|https://github.com/apache/hadoop/blob/233619a0a462ae2eb7e7253b6bb8ae48eaa5eb19/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L6986-L6994].
>  This leads to the inefficiency In HDFS-15492 that we have to get the 
> *entire* list of snapshottable directory to check if a file being deleted is 
> inside a snapshottable directory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=488388=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488388
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 16:17
Start Date: 22/Sep/20 16:17
Worklog Time Spent: 10m 
  Work Description: liuml07 commented on a change in pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#discussion_r492866266



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
##
@@ -202,6 +205,14 @@ public void testDfsReservedForDifferentStorageTypes() 
throws IOException {
 .setConf(conf)
 .build();
 assertEquals("", 100L, volume4.getReserved());
+FsVolumeImpl volume5 = new FsVolumeImplBuilder().setDataset(dataset)
+.setStorageDirectory(
+new StorageDirectory(
+StorageLocation.parse("[NVDIMM]"+volDir.getPath())))
+.setStorageID("storage-id")
+.setConf(conf)
+.build();
+assertEquals("", 3L, volume5.getReserved());

Review comment:
   Usually we can, but following the original code style here is bad. When it 
fails, the original code gives an empty message string. Mine shows the 
expected value and the actual value so you can debug. Please change it.
   
   We can also file a JIRA to update all such cases where `assertEquals` can be 
improved.
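
Concretely, the suggestion is the difference between these two assertions (the 
message string below is illustrative, not the reviewer's wording; {{volume5}} 
comes from the diff above):

{code:java}
import static org.junit.Assert.assertEquals;

// Empty message: on failure JUnit prints only the expected/actual values.
assertEquals("", 3L, volume5.getReserved());

// Descriptive message: on failure it also says what was being checked.
assertEquals("reserved space for the [NVDIMM] volume",
    3L, volume5.getReserved());
{code}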





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 488388)
Time Spent: 8h 50m  (was: 8h 40m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 8h 50m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not 
> only improves the response rate of HDFS but also ensures the reliability 
> of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15581) Access Controlled HTTPFS Proxy

2020-09-22 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200188#comment-17200188
 ] 

Kihwal Lee commented on HDFS-15581:
---

I've committed this to trunk, branch-3.3 and branch-3.2. Thanks for working on 
this, [~richard-ross].

> Access Controlled HTTPFS Proxy
> --
>
> Key: HDFS-15581
> URL: https://issues.apache.org/jira/browse/HDFS-15581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.4.0
>Reporter: Richard
>Assignee: Richard
>Priority: Minor
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
> Attachments: HADOOP-17244.001.patch
>
>
> There are certain data migration patterns that require a way to limit access 
> to the HDFS via the HTTPFS proxy.  The needed access modes are read-write, 
> read-only and write-only.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-13009) Creation of Encryption zone should succeed even if directory is not empty.

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13009?focusedWorklogId=488387=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488387
 ]

ASF GitHub Bot logged work on HDFS-13009:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 16:16
Start Date: 22/Sep/20 16:16
Worklog Time Spent: 10m 
  Work Description: zehaoc2 opened a new pull request #2328:
URL: https://github.com/apache/hadoop/pull/2328


   This is a change we (Verizon Media) have been running in production for 
2 years.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 488387)
Remaining Estimate: 0h
Time Spent: 10m

> Creation of Encryption zone should succeed even if directory is not empty.
> --
>
> Key: HDFS-13009
> URL: https://issues.apache.org/jira/browse/HDFS-13009
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Rushabh Shah
>Assignee: Rushabh Shah
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently we have a restriction that an encryption zone can be created 
> only on an empty directory.
> This jira is to remove that restriction.
> Motivation:
> New customers who want to start using encryption zones can make an existing 
> directory encrypted.
> They will be able to read the old data as it is, and newly written data 
> will be encrypted (and transparently decrypted on read).
> Internally we have many customers asking for this feature.
> Currently they have to ask for more space quota and encrypt the old data.
> This will make their lives much easier.
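
A hedged sketch of the user-visible effect, using the existing 
{{HdfsAdmin#createEncryptionZone}} API; the namenode URI, path, and key name 
are illustrative.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class EzOnExistingDir {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), conf);
    // Today this throws if /data already contains files; the proposal would
    // let it succeed, with old files staying as-is and new writes encrypted.
    admin.createEncryptionZone(new Path("/data"), "key1");
  }
}
{code}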



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13009) Creation of Encryption zone should succeed even if directory is not empty.

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-13009:
--
Labels: pull-request-available  (was: )

> Creation of Encryption zone should succeed even if directory is not empty.
> --
>
> Key: HDFS-13009
> URL: https://issues.apache.org/jira/browse/HDFS-13009
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Rushabh Shah
>Assignee: Rushabh Shah
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently we have a restriction that an encryption zone can be created 
> only on an empty directory.
> This jira is to remove that restriction.
> Motivation:
> New customers who want to start using encryption zones can make an existing 
> directory encrypted.
> They will be able to read the old data as it is, and newly written data 
> will be encrypted (and transparently decrypted on read).
> Internally we have many customers asking for this feature.
> Currently they have to ask for more space quota and encrypt the old data.
> This will make their lives much easier.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15581) Access Controlled HTTPFS Proxy

2020-09-22 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-15581:
--
Fix Version/s: 3.4.0
   3.3.1
   3.2.2
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Access Controlled HTTPFS Proxy
> --
>
> Key: HDFS-15581
> URL: https://issues.apache.org/jira/browse/HDFS-15581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.4.0
>Reporter: Richard
>Assignee: Richard
>Priority: Minor
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
> Attachments: HADOOP-17244.001.patch
>
>
> There are certain data migration patterns that require a way to limit access 
> to the HDFS via the HTTPFS proxy.  The needed access modes are read-write, 
> read-only and write-only.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-22 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15415:
-
Attachment: HDFS-15415.branch-3.2.002.patch

> Reduce locking in Datanode DirectoryScanner
> ---
>
> Key: HDFS-15415
> URL: https://issues.apache.org/jira/browse/HDFS-15415
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15415.001.patch, HDFS-15415.002.patch, 
> HDFS-15415.003.patch, HDFS-15415.004.patch, HDFS-15415.005.patch, 
> HDFS-15415.branch-3.2.001.patch, HDFS-15415.branch-3.2.002.patch, 
> HDFS-15415.branch-3.3.001.patch
>
>
> In HDFS-15406, we have a small change that greatly reduces the runtime and 
> locking time of the datanode DirectoryScanner. There may be room for further 
> improvement.
> From the scan step, we have captured a snapshot of what is on disk. After 
> calling `dataset.getFinalizedBlocks(bpid);` we have taken a snapshot of what 
> is in memory. The two snapshots are never 100% in sync, as things are always 
> changing while the disk is scanned.
> We are only comparing finalized blocks, so they should not really change:
> * If a block is deleted after our snapshot, our snapshot will not see it, and 
> that is OK.
> * A finalized block could be appended. If that happens, both the genstamp and 
> length will change, but that should be handled by reconcile when it calls 
> `FSDatasetImpl.checkAndUpdate()`; there is nothing stopping blocks being 
> appended after they have been scanned from disk but before they have been 
> compared with memory.
> My suspicion is that we can do all the comparison work outside of the lock 
> and let checkAndUpdate() re-check any differences later, under the lock, on a 
> block-by-block basis.
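
A minimal, self-contained sketch of that pattern, not the actual 
DirectoryScanner code: diff the two snapshots with no lock held, then re-check 
each suspect block individually under the lock, since either side may have 
changed while the comparison ran.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ReconcileSketch {
  private final Object datasetLock = new Object();
  private final Map<Long, Long> memory = new HashMap<>(); // blockId -> genstamp

  void reconcile(Map<Long, Long> diskSnapshot, Map<Long, Long> memorySnapshot) {
    // Phase 1: diff the two snapshots with NO lock held; collect suspects only.
    List<Long> suspects = new ArrayList<>();
    for (Map.Entry<Long, Long> e : diskSnapshot.entrySet()) {
      if (!e.getValue().equals(memorySnapshot.get(e.getKey()))) {
        suspects.add(e.getKey());
      }
    }
    // Phase 2: short per-block critical sections against the LIVE state,
    // re-checking each difference (stand-in for FsDatasetImpl.checkAndUpdate()).
    for (Long blockId : suspects) {
      synchronized (datasetLock) {
        Long live = memory.get(blockId);
        Long onDisk = diskSnapshot.get(blockId);
        if (!onDisk.equals(live)) {
          memory.put(blockId, onDisk);
        }
      }
    }
  }
}
{code}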



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15382) Split FsDatasetImpl from blockpool lock to blockpool volume lock

2020-09-22 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200157#comment-17200157
 ] 

Xiaoqiao He commented on HDFS-15382:


[~weichiu],[~sodonnell],[~linyiqun] do you have time to review this solution? 
It works well in our internal cluster. I believe this is a useful feature if 
we can push it forward.
Any suggestions and comments are welcome. We will prepare a new patch based on 
trunk if we come to an agreement on this solution.

> Split FsDatasetImpl from blockpool lock to blockpool volume lock 
> -
>
> Key: HDFS-15382
> URL: https://issues.apache.org/jira/browse/HDFS-15382
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Aiphago
>Assignee: Aiphago
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: HDFS-15382-sample.patch, image-2020-06-02-1.png, 
> image-2020-06-03-1.png
>
>
> In HDFS-15180 we split the lock to block pool granularity. But when one 
> volume is under heavy load, it will block other requests in the same block 
> pool even when they target a different volume. So we split the lock into two 
> levels to avoid this and to improve datanode performance.
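
A hedged sketch of the two-level idea, assuming one lock per (block pool, 
volume) pair; the names are illustrative and this is not the HDFS-15382 patch.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class TwoLevelLockSketch {
  // One lock per (block pool, volume) pair: a slow volume blocks only its own
  // requests, not every request in the same block pool.
  private final Map<String, ReentrantReadWriteLock> locks =
      new ConcurrentHashMap<>();

  ReentrantReadWriteLock lockFor(String bpid, String volumeId) {
    return locks.computeIfAbsent(bpid + "/" + volumeId,
        k -> new ReentrantReadWriteLock());
  }

  void writeBlock(String bpid, String volumeId, Runnable io) {
    ReentrantReadWriteLock lock = lockFor(bpid, volumeId);
    lock.writeLock().lock();
    try {
      io.run(); // per-volume critical section
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}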



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=488331=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488331
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 15:15
Start Date: 22/Sep/20 15:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-696788468


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  1s |  buf was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
16 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m 35s |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 47s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  21m 59s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 25s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 36s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 37s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 44s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   9m 38s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 12s |  the patch passed  |
   | +1 :green_heart: |  compile  |  27m 37s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  27m 37s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 22 new + 141 unchanged - 
22 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  27m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m  0s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  24m  0s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 25 new + 138 unchanged - 
25 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  24m  0s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 34s |  root: The patch generated 16 new 
+ 723 unchanged - 6 fixed = 739 total (was 729)  |
   | +1 :green_heart: |  mvnsite  |   4m 59s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  19m 21s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 56s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 47s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |  11m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  11m 45s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 39s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 107m 50s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 356m 14s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestDFSClientRetries |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | 

[jira] [Commented] (HDFS-15581) Access Controlled HTTPFS Proxy

2020-09-22 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200135#comment-17200135
 ] 

Kihwal Lee commented on HDFS-15581:
---

+1 The patch looks good. The documentation in {{httpfs-default.xml}} is also 
adequate. It will be linked from 
{{hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/markdown/ServerSetup.md.vm}} 
when the documentation is generated.


> Access Controlled HTTPFS Proxy
> --
>
> Key: HDFS-15581
> URL: https://issues.apache.org/jira/browse/HDFS-15581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.4.0
>Reporter: Richard
>Assignee: Richard
>Priority: Minor
> Attachments: HADOOP-17244.001.patch
>
>
> There are certain data migration patterns that require a way to limit access 
> to the HDFS via the HTTPFS proxy.  The needed access modes are read-write, 
> read-only and write-only.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-22 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200128#comment-17200128
 ] 

Hadoop QA commented on HDFS-15415:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 22m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
59s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} branch-3.2 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
56s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 39 unchanged - 1 fixed = 41 total (was 40) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}204m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/199/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15415 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13011918/HDFS-15415.branch-3.2.001.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 8d865470d1b7 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | 

[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=488312=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488312
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 14:51
Start Date: 22/Sep/20 14:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-696772998


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  0s |  buf was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
16 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m 35s |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 47s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  19m 58s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 14s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 51s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 16s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 46s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 56s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   9m  1s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 13s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  24m 13s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 27 new + 136 unchanged - 
27 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  24m 13s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  3s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  20m  3s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 30 new + 133 unchanged - 
30 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  20m  3s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 12s |  root: The patch generated 4 new 
+ 725 unchanged - 4 fixed = 729 total (was 729)  |
   | +1 :green_heart: |  mvnsite  |   3m 52s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 49s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 16s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 44s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   9m 53s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m 33s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   2m 29s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 132m 31s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 356m 13s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.security.TestRaceWhenRelogin |
   |   | hadoop.hdfs.TestQuota |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestDecommissionWithBackoffMonitor |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.TestFileChecksum |
   
   
   | Subsystem | Report/Notes |
   

[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=488308=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488308
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 14:45
Start Date: 22/Sep/20 14:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-696769048


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 10s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  1s |  buf was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
16 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m  3s |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m  5s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  20m 41s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 12s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 14s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 31s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 41s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 54s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   8m 43s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 15s |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m  1s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  24m  1s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 10 new + 153 unchanged - 
10 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  24m  1s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 48s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  20m 48s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 6 new + 157 unchanged - 
6 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  20m 48s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m  7s |  root: The patch generated 4 new 
+ 725 unchanged - 4 fixed = 729 total (was 729)  |
   | +1 :green_heart: |  mvnsite  |   4m 11s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 31s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 39s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   9m 18s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 19s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 18s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 134m 19s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 357m 54s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.TestFileChecksum |
   
   
   | Subsystem | Report/Notes |
   

[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=488300=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488300
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 14:28
Start Date: 22/Sep/20 14:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-696758750


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  4s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  1s |  buf was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
16 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 33s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 57s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 38s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  3s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 48s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  2s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 10s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 38s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 37s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   8m  7s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  8s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  20m  8s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 30 new + 133 unchanged - 
30 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  20m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 39s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  17m 39s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 5 new + 158 unchanged - 
5 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  17m 39s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m  2s |  root: The patch generated 16 new 
+ 723 unchanged - 6 fixed = 739 total (was 729)  |
   | +1 :green_heart: |  mvnsite  |   3m 50s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 22s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 11s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 35s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   8m 42s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 10s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 15s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 113m 52s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 319m 19s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | 

[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=488297=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488297
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 14:26
Start Date: 22/Sep/20 14:26
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-696757028


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 44s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  1s |  buf was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
16 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 58s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 42s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 16s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  2s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  5s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 51s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 22s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 48s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 41s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   8m 14s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  6s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 55s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  20m 55s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 9 new + 154 unchanged - 
9 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  20m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 15s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  18m 15s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 14 new + 149 unchanged - 
14 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  18m 15s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 59s |  root: The patch generated 16 new 
+ 725 unchanged - 6 fixed = 741 total (was 731)  |
   | +1 :green_heart: |  mvnsite  |   4m  9s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 29s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 49s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   8m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 10s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 30s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 113m 41s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 11s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 319m 59s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestByteBufferPread |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.TestFileAppend4 |
   |   | 

[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=488291=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488291
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 14:21
Start Date: 22/Sep/20 14:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-696754216


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  0s |  buf was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
16 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 28s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 55s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 10s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 38s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  0s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 55s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 39s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 22s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 50s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 38s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   8m 16s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 53s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 54s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  20m 54s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 35 new + 128 unchanged - 
35 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  20m 54s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 50s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  18m 50s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 34 new + 129 unchanged - 
34 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  18m 50s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 58s |  root: The patch generated 4 new 
+ 727 unchanged - 4 fixed = 731 total (was 731)  |
   | +1 :green_heart: |  mvnsite  |   4m  2s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  6s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  1s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   8m 48s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 47s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 24s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 115m 39s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 321m 38s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.mover.TestStorageMover |
   |   | hadoop.hdfs.TestDatanodeLayoutUpgrade |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestMultiThreadedHflush |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | 

[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=488282&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488282
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 14:02
Start Date: 22/Sep/20 14:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#issuecomment-696742400


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  1s |  buf was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
16 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m 17s |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 31s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  20m 46s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 27s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 25s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 57s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 37s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 17s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  3s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   9m  9s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 40s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  22m 40s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 32 new + 131 unchanged - 
32 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  22m 40s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 59s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  18m 59s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 7 new + 156 unchanged - 
7 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  javac  |  18m 59s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 57s |  root: The patch generated 4 new 
+ 725 unchanged - 4 fixed = 729 total (was 729)  |
   | +1 :green_heart: |  mvnsite  |   4m  4s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 14s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 28s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 49s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   8m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 30s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 18s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  |  97m 36s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 318m  7s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestFileChecksum |
   
   
  

[jira] [Commented] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-22 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17200013#comment-17200013
 ] 

Stephen O'Donnell commented on HDFS-15415:
--

I committed the branch-3.3 patch and uploaded a new patch for branch-3.2. It's 
basically the same change: removing the lock block around the entire 
scan-processing loop.
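
To make the locking change concrete, below is a minimal, self-contained sketch 
of the pattern being applied; the names (Block, scanDisk, snapshotMemory, 
checkAndUpdate, datasetLock) are illustrative stand-ins for the 
DirectoryScanner/FsDatasetImpl machinery, not the actual API: take both 
snapshots quickly, compare them without holding the lock, then re-check each 
difference under the lock, block by block.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only, not the real DirectoryScanner code.
public class LockFreeCompareSketch {
  static final Object datasetLock = new Object(); // stand-in for the dataset lock

  static class Block {
    final long id, genStamp, length;
    Block(long id, long genStamp, long length) {
      this.id = id; this.genStamp = genStamp; this.length = length;
    }
  }

  public static void main(String[] args) {
    List<Block> diskSnapshot = scanDisk();          // built during the scan, no lock
    Map<Long, Block> memSnapshot;
    synchronized (datasetLock) {                    // short lock: snapshot memory only
      memSnapshot = snapshotMemory();
    }
    List<Block> diffs = new ArrayList<>();          // heavy comparison, lock-free
    for (Block b : diskSnapshot) {
      Block m = memSnapshot.get(b.id);
      if (m == null || m.genStamp != b.genStamp || m.length != b.length) {
        diffs.add(b);
      }
    }
    for (Block b : diffs) {
      synchronized (datasetLock) {                  // re-check one block per lock hold
        checkAndUpdate(b);                          // resolve the difference under lock
      }
    }
  }

  static List<Block> scanDisk() { return List.of(new Block(1, 1001, 64)); }
  static Map<Long, Block> snapshotMemory() {
    Map<Long, Block> m = new HashMap<>();
    m.put(1L, new Block(1, 1001, 64));
    return m;
  }
  static void checkAndUpdate(Block b) { /* no-op in this sketch */ }
}
{code}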

> Reduce locking in Datanode DirectoryScanner
> ---
>
> Key: HDFS-15415
> URL: https://issues.apache.org/jira/browse/HDFS-15415
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15415.001.patch, HDFS-15415.002.patch, 
> HDFS-15415.003.patch, HDFS-15415.004.patch, HDFS-15415.005.patch, 
> HDFS-15415.branch-3.2.001.patch, HDFS-15415.branch-3.3.001.patch
>
>
> In HDFS-15406, we made a small change that greatly reduces the runtime and 
> locking time of the datanode DirectoryScanner. There may be room for further 
> improvement.
> In the scan step, we capture a snapshot of what is on disk. After 
> calling `dataset.getFinalizedBlocks(bpid);` we have taken a snapshot of what 
> is in memory. The two snapshots are never 100% in sync, as things are always 
> changing while the disk is scanned.
> We are only comparing finalized blocks, so they should not really change:
> * If a block is deleted after our snapshot, our snapshot will not see it, and 
> that is OK.
> * A finalized block could be appended. If that happens, both the genstamp and 
> length will change, but that should be handled by reconcile when it calls 
> `FSDatasetImpl.checkAndUpdate()`, and there is nothing stopping blocks being 
> appended after they have been scanned from disk but before they have been 
> compared with memory.
> My suspicion is that we can do all the comparison work outside of the lock, 
> with checkAndUpdate() re-checking any differences later under the lock on a 
> block-by-block basis.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-22 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15415:
-
Attachment: HDFS-15415.branch-3.2.001.patch

> Reduce locking in Datanode DirectoryScanner
> ---
>
> Key: HDFS-15415
> URL: https://issues.apache.org/jira/browse/HDFS-15415
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15415.001.patch, HDFS-15415.002.patch, 
> HDFS-15415.003.patch, HDFS-15415.004.patch, HDFS-15415.005.patch, 
> HDFS-15415.branch-3.2.001.patch, HDFS-15415.branch-3.3.001.patch
>
>
> In HDFS-15406, we made a small change that greatly reduces the runtime and 
> locking time of the datanode DirectoryScanner. There may be room for further 
> improvement.
> In the scan step, we capture a snapshot of what is on disk. After 
> calling `dataset.getFinalizedBlocks(bpid);` we have taken a snapshot of what 
> is in memory. The two snapshots are never 100% in sync, as things are always 
> changing while the disk is scanned.
> We are only comparing finalized blocks, so they should not really change:
> * If a block is deleted after our snapshot, our snapshot will not see it, and 
> that is OK.
> * A finalized block could be appended. If that happens, both the genstamp and 
> length will change, but that should be handled by reconcile when it calls 
> `FSDatasetImpl.checkAndUpdate()`, and there is nothing stopping blocks being 
> appended after they have been scanned from disk but before they have been 
> compared with memory.
> My suspicion is that we can do all the comparison work outside of the lock, 
> with checkAndUpdate() re-checking any differences later under the lock on a 
> block-by-block basis.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-22 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15415:
-
Fix Version/s: 3.3.1

> Reduce locking in Datanode DirectoryScanner
> ---
>
> Key: HDFS-15415
> URL: https://issues.apache.org/jira/browse/HDFS-15415
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15415.001.patch, HDFS-15415.002.patch, 
> HDFS-15415.003.patch, HDFS-15415.004.patch, HDFS-15415.005.patch, 
> HDFS-15415.branch-3.3.001.patch
>
>
> In HDFS-15406, we made a small change that greatly reduces the runtime and 
> locking time of the datanode DirectoryScanner. There may be room for further 
> improvement.
> In the scan step, we capture a snapshot of what is on disk. After 
> calling `dataset.getFinalizedBlocks(bpid);` we have taken a snapshot of what 
> is in memory. The two snapshots are never 100% in sync, as things are always 
> changing while the disk is scanned.
> We are only comparing finalized blocks, so they should not really change:
> * If a block is deleted after our snapshot, our snapshot will not see it, and 
> that is OK.
> * A finalized block could be appended. If that happens, both the genstamp and 
> length will change, but that should be handled by reconcile when it calls 
> `FSDatasetImpl.checkAndUpdate()`, and there is nothing stopping blocks being 
> appended after they have been scanned from disk but before they have been 
> compared with memory.
> My suspicion is that we can do all the comparison work outside of the lock, 
> with checkAndUpdate() re-checking any differences later under the lock on a 
> block-by-block basis.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15583) Backport DirectoryScanner improvements HDFS-14476, HDFS-14751 and HDFS-15048 to branch 3.2 and 3.1

2020-09-22 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15583:
-
Fix Version/s: 3.2.3
   3.1.5
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to branch-3.2 and branch-3.1, with no conflicts there either. Thanks 
for the review, [~weichiu].

> Backport DirectoryScanner improvements HDFS-14476, HDFS-14751 and HDFS-15048 
> to branch 3.2 and 3.1
> --
>
> Key: HDFS-15583
> URL: https://issues.apache.org/jira/browse/HDFS-15583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0, 3.2.1
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.1.5, 3.2.3
>
> Attachments: HDFS-15583.branch-3.2.001.patch
>
>
> HDFS-14476, HDFS-14751 and HDFS-15048 made some good improvements to the 
> datanode DirectoryScanner, but due to a large refactor of that class in 
> branch-3.3, they are not trivial to backport to earlier branches.
> HDFS-14476 introduced the problem addressed in HDFS-14751 and a findbugs 
> warning fixed in HDFS-15048, so these three need to be backported together.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?focusedWorklogId=488112&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488112
 ]

ASF GitHub Bot logged work on HDFS-15025:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 09:32
Start Date: 22/Sep/20 09:32
Worklog Time Spent: 10m 
  Work Description: YaYun-Wang commented on a change in pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#discussion_r492598805



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
##
@@ -202,6 +205,14 @@ public void testDfsReservedForDifferentStorageTypes() 
throws IOException {
 .setConf(conf)
 .build();
 assertEquals("", 100L, volume4.getReserved());
+FsVolumeImpl volume5 = new FsVolumeImplBuilder().setDataset(dataset)
+.setStorageDirectory(
+new StorageDirectory(
+StorageLocation.parse("[NVDIMM]"+volDir.getPath())))
+.setStorageID("storage-id")
+.setConf(conf)
+.build();
+assertEquals("", 3L, volume5.getReserved());

Review comment:
   In order to be consistent with the original code, the `assertEquals()` 
here has three parameters, as on lines 196 and 204 of the original code.
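
   For context, a minimal sketch of the three-argument JUnit 4 overload in 
question, where the first parameter is the failure message (empty here to 
match the surrounding assertions); the values are illustrative:

{code:java}
import static org.junit.Assert.assertEquals;

public class ReservedAssertionSketch {
  public static void main(String[] args) {
    long reserved = 3L;             // illustrative value, not a real volume's reserve
    assertEquals("", 3L, reserved); // (message, expected, actual)
  }
}
{code}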
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 488112)
Time Spent: 7.5h  (was: 7h 20m)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.
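
As the test diff above suggests, an NVDIMM volume is declared by tagging a 
datanode data directory with the [NVDIMM] storage type. A hedged hdfs-site.xml 
sketch; the mount paths are illustrative:

{code:xml}
<!-- Sketch only: /mnt/nvdimm/dn and /data/dn are illustrative paths. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[NVDIMM]/mnt/nvdimm/dn,[DISK]/data/dn</value>
</property>
{code}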



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15590?focusedWorklogId=488111&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-488111
 ]

ASF GitHub Bot logged work on HDFS-15590:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 09:30
Start Date: 22/Sep/20 09:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2326:
URL: https://github.com/apache/hadoop/pull/2326#issuecomment-696610845


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 39s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 34s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 40s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 16s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 19s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 17s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 15s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 43s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 1 new + 20 unchanged - 0 fixed = 21 total (was 20)  |
   | +1 :green_heart: |  mvnsite  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  2s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  6s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  97m 15s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 190m 42s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2326 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1286642aaefc 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6b5d9e2334b |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 

[jira] [Issue Comment Deleted] (HDFS-15513) Allow client to query snapshot status on one directory

2020-09-22 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangHua Zhu updated HDFS-15513:

Comment: was deleted

(was: We can add more fine-grained interfaces that external users can access, 
such as obtaining specific information through the snapshot name.
 It can be implemented in two places:
 1. In the DFSClient class;
 2. Via a command;
 [~elgoiri], [~hemanthboyina], do you have other ideas?

 )

> Allow client to query snapshot status on one directory
> --
>
> Key: HDFS-15513
> URL: https://issues.apache.org/jira/browse/HDFS-15513
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Affects Versions: 3.3.0
>Reporter: Siyao Meng
>Priority: Major
>
> Alternatively, we can allow the client to query snapshot status on *a list 
> of* given directories. Thoughts?
> Rationale:
> At the moment, we can only retrieve the full list of snapshottable 
> directories with 
> [{{getSnapshottableDirListing()}}|https://github.com/apache/hadoop/blob/233619a0a462ae2eb7e7253b6bb8ae48eaa5eb19/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L6986-L6994].
>  This leads to the inefficiency in HDFS-15492 that we have to get the 
> *entire* list of snapshottable directories to check whether a file being 
> deleted is inside a snapshottable directory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15513) Allow client to query snapshot status on one directory

2020-09-22 Thread JiangHua Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17199938#comment-17199938
 ] 

JiangHua Zhu commented on HDFS-15513:
-

Hi, [~smeng], there is a method (DFSClient#getSnapshotListing()) that finds 
snapshot data for a given directory.
I don't know if it is what you want.
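
For comparison, a sketch of what a client can do today with the full listing 
via DistributedFileSystem#getSnapshottableDirListing(), which the rationale 
below references; the fs.defaultFS value is illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;

public class SnapshottableListingSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://localhost:8020"); // illustrative cluster address
    try (FileSystem fs = FileSystem.get(conf)) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // Fetches the *entire* snapshottable directory list from the NameNode.
      SnapshottableDirectoryStatus[] dirs = dfs.getSnapshottableDirListing();
      if (dirs != null) {
        for (SnapshottableDirectoryStatus s : dirs) {
          Path p = s.getFullPath();
          System.out.println(p + " snapshots=" + s.getSnapshotNumber());
        }
      }
    }
  }
}
{code}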

> Allow client to query snapshot status on one directory
> --
>
> Key: HDFS-15513
> URL: https://issues.apache.org/jira/browse/HDFS-15513
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Affects Versions: 3.3.0
>Reporter: Siyao Meng
>Priority: Major
>
> Alternatively, we can allow the client to query snapshot status on *a list 
> of* given directories. Thoughts?
> Rationale:
> At the moment, we can only retrieve the full list of snapshottable 
> directories with 
> [{{getSnapshottableDirListing()}}|https://github.com/apache/hadoop/blob/233619a0a462ae2eb7e7253b6bb8ae48eaa5eb19/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L6986-L6994].
>  This leads to the inefficiency in HDFS-15492 that we have to get the 
> *entire* list of snapshottable directories to check whether a file being 
> deleted is inside a snapshottable directory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15589) Huge PostponedMisreplicatedBlocks can't decrease immediately when start namenode after datanode

2020-09-22 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17199910#comment-17199910
 ] 

zhengchenyu commented on HDFS-15589:


[~hexiaoqiao]
Yes, in theory, postponedMisreplicatedBlocks only impacts the function 
'rescanPostponedMisreplicatedBlocks', which takes the namesystem's write lock 
and so may decrease NameNode RPC performance. But 
dfs.namenode.blocks.per.postponedblocks.rescan's default value is 1, so I 
think the performance impact should be small. But let us look at some logs; 
some calls took a long time.
{code}
hadoop-hdfs-namenode-bd-tz-hadoop-001012.ke.com.log.info.9:2020-09-21 
15:20:15,429 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
Rescan of postponedMisreplicatedBlocks completed in 65 msecs. 19916 blocks are 
left. 0 blocks were removed.
hadoop-hdfs-namenode-bd-tz-hadoop-001012.ke.com.log.info.9:2020-09-21 
15:20:18,496 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
Rescan of postponedMisreplicatedBlocks completed in 64 msecs. 19916 blocks are 
left. 0 blocks were removed.
hadoop-hdfs-namenode-bd-tz-hadoop-001012.ke.com.log.info.9:2020-09-21 
15:20:23,958 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
Rescan of postponedMisreplicatedBlocks completed in 2459 msecs. 19916 blocks 
are left. 0 blocks were removed.
hadoop-hdfs-namenode-bd-tz-hadoop-001012.ke.com.log.info.9:2020-09-21 
15:20:27,023 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
Rescan of postponedMisreplicatedBlocks completed in 60 msecs. 19916 blocks are 
left. 0 blocks were removed.
hadoop-hdfs-namenode-bd-tz-hadoop-001012.ke.com.log.info.9:2020-09-21 
15:20:30,088 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
Rescan of postponedMisreplicatedBlocks completed in 61 msecs. 19916 blocks are 
left. 0 blocks were removed.
hadoop-hdfs-namenode-bd-tz-hadoop-001012.ke.com.log.info.9:2020-09-21 
15:20:33,149 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
Rescan of postponedMisreplicatedBlocks completed in 58 msecs. 19916 blocks are 
left. 0 blocks were removed.
hadoop-hdfs-namenode-bd-tz-hadoop-001012.ke.com.log.info.9:2020-09-21 
15:20:47,890 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
Rescan of postponedMisreplicatedBlocks completed in 5140 msecs. 19916 blocks 
are left. 0 blocks were removed.
hadoop-hdfs-namenode-bd-tz-hadoop-001012.ke.com.log.info.9:2020-09-21 
15:32:36,458 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
Rescan of postponedMisreplicatedBlocks completed in 110 msecs. 19916 blocks are 
left. 0 blocks were removed.
hadoop-hdfs-namenode-bd-tz-hadoop-001012.ke.com.log.info.9:2020-09-21 
15:32:39,529 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
Rescan of postponedMisreplicatedBlocks completed in 70 msecs. 19916 blocks are 
left. 0 blocks were removed.
hadoop-hdfs-namenode-bd-tz-hadoop-001012.ke.com.log.info.9:2020-09-21 
15:32:42,596 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
Rescan of postponedMisreplicatedBlocks completed in 66 msecs. 19916 blocks are 
left. 0 blocks were removed.
hadoop-hdfs-namenode-bd-tz-hadoop-001012.ke.com.log.info.9:2020-09-21 
15:32:45,665 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
Rescan of postponedMisreplicatedBlocks completed in 65 msecs. 19916 blocks are 
left. 0 blocks were removed.
{code}
In fact, this was found in our test cluster, which is very small, so we cannot 
measure the performance impact there. But why do I pay attention to this 
problem? At my last company, one day postponedMisreplicatedBlocks grew huge 
and NameNode RPC performance decreased. Some hours later, 
postponedMisreplicatedBlocks decreased and the NameNode was well again. At 
that time I was focused on YARN, so I did not examine the NameNode logs, and 
the real cause was never found.

> Huge PostponedMisreplicatedBlocks can't decrease immediately when start 
> namenode after datanode
> ---
>
> Key: HDFS-15589
> URL: https://issues.apache.org/jira/browse/HDFS-15589
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
> Environment: CentOS 7
>Reporter: zhengchenyu
>Priority: Major
>
> In our test cluster, I restarted my namenode. Then I found many 
> PostponedMisreplicatedBlocks which did not decrease immediately. 
> I searched the log and found entries like the ones below.
> {code:java}
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=c6a9934f-afd4-4437-b976-fed55173ce57, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* 

[jira] [Commented] (HDFS-15582) Reduce NameNode audit log

2020-09-22 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17199883#comment-17199883
 ] 

Hadoop QA commented on HDFS-15582:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
9s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Commented] (HDFS-15589) Huge PostponedMisreplicatedBlocks can't decrease immediately when start namenode after datanode

2020-09-22 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17199870#comment-17199870
 ] 

Xiaoqiao He commented on HDFS-15589:


Thanks [~zhengchenyu] for your report. Just wondering, is there any impact on 
the NameNode when PMB (abbr. `PostponedMisreplicatedBlocks`) stays at a large 
number for a long time? The largest number of PMB was near 100M in my 
practice, and I did not meet any performance issue with my internal branch. 
Did you meet any issues? Thanks.

> Huge PostponedMisreplicatedBlocks can't decrease immediately when start 
> namenode after datanode
> ---
>
> Key: HDFS-15589
> URL: https://issues.apache.org/jira/browse/HDFS-15589
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
> Environment: CentOS 7
>Reporter: zhengchenyu
>Priority: Major
>
> In our test cluster, I restarted my namenode. Then I found many 
> PostponedMisreplicatedBlocks which did not decrease immediately. 
> I searched the log and found entries like the ones below.
> {code:java}
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=c6a9934f-afd4-4437-b976-fed55173ce57, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=aee144f1-2082-4bca-a92b-f3c154a71c65, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=d152fa5b-1089-4bfc-b9c4-e3a7d98c7a7b, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,156 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=5cffc1fe-ace9-4af8-adfc-6002a7f5565d, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,161 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=9980d8e1-b0d9-4657-b97d-c803f82c1459, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,197 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=77ff3f5e-37f0-405f-a16c-166311546cae, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> {code}
> Note: the test cluster only has 6 datanodes.
> You will see the block report called before "Marking all datanodes as stale", 
> which is logged by startActiveServices. But 
> DatanodeStorageInfo.blockContentsStale is only set to false in a block 
> report, and startActiveServices then marks all datanodes as stale. So the 
> datanodes will stay stale until the next block report, and 
> PostponedMisreplicatedBlocks stays at a huge number.
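
A minimal sketch of the ordering problem described above; the class and method 
names are illustrative stand-ins for the DatanodeStorageInfo/BlockManager 
logic, not the actual API:

{code:java}
// Sketch only: a block report that arrives before startActiveServices() is
// wasted, because the failover re-marks every storage as stale afterwards,
// and only the next block report can clear the flag again.
class StorageStaleSketch {
  volatile boolean blockContentsStale = true;
  void onBlockReport()         { blockContentsStale = false; } // only place it clears
  void onStartActiveServices() { blockContentsStale = true;  } // failover re-marks stale
}
{code}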



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15589) Huge PostponedMisreplicatedBlocks can't decrease immediately when start namenode after datanode

2020-09-22 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17199847#comment-17199847
 ] 

zhengchenyu edited comment on HDFS-15589 at 9/22/20, 6:24 AM:
--

Yes, I can solve this problem by triggering a block report manually. What I 
mean is: is there any need to solve this problem by optimizing the logic?

For example, make sure the new block report triggered by the namenode's 
heartbeat happens after the namenode enters the active state.

Because, you know, when I trigger the datanode's block report, the block 
report will occur twice. I think there is no need to increase the load on the 
namenode. In addition, as far as I know, triggering a block report manually 
reports blocks to all namenodes, which increases the load on all of them.
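
For reference, the manual workaround being discussed is the dfsadmin trigger 
command; a quick sketch, where datanode-host is an illustrative hostname and 
9867 is the ipcPort shown in the logs above:

{code}
# Forces an immediate full block report from one datanode; as noted above,
# it reports to every namenode, not only the newly active one.
hdfs dfsadmin -triggerBlockReport datanode-host:9867
{code}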


was (Author: zhengchenyu):
Yes, I can solve this problem by triggering a block report manually. What I 
mean is: is there any need to solve this problem by optimizing the logic?

For example, make sure the new block report triggered by the namenode's 
heartbeat happens after the namenode enters the active state.

Because, you know, when I trigger the datanode's block report, the block 
report will occur twice. I think there is no need to increase the load on the 
namenode.

> Huge PostponedMisreplicatedBlocks can't decrease immediately when start 
> namenode after datanode
> ---
>
> Key: HDFS-15589
> URL: https://issues.apache.org/jira/browse/HDFS-15589
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
> Environment: CentOS 7
>Reporter: zhengchenyu
>Priority: Major
>
> In our test cluster, I restarted my namenode. Then I found many 
> PostponedMisreplicatedBlocks which did not decrease immediately. 
> I searched the log and found entries like the ones below.
> {code:java}
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=c6a9934f-afd4-4437-b976-fed55173ce57, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=aee144f1-2082-4bca-a92b-f3c154a71c65, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=d152fa5b-1089-4bfc-b9c4-e3a7d98c7a7b, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,156 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=5cffc1fe-ace9-4af8-adfc-6002a7f5565d, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,161 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=9980d8e1-b0d9-4657-b97d-c803f82c1459, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,197 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=77ff3f5e-37f0-405f-a16c-166311546cae, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> {code}
> Note: the test cluster only has 6 datanodes.
> You will see the block report called before "Marking all datanodes as stale", 
> which is logged by startActiveServices. But 
> DatanodeStorageInfo.blockContentsStale is only set to false in a block 
> report, and startActiveServices then marks all datanodes as stale. So the 
> datanodes will stay stale until the next block report, and 
> PostponedMisreplicatedBlocks stays at a huge number.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15589) Huge PostponedMisreplicatedBlocks can't decrease immediately when start namenode after datanode

2020-09-22 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17199847#comment-17199847
 ] 

zhengchenyu edited comment on HDFS-15589 at 9/22/20, 6:20 AM:
--

Yes, I can solve this problem by triggering a block report manually. What I 
mean is: is there any need to solve this problem by optimizing the logic?

For example, make sure the new block report triggered by the namenode's 
heartbeat happens after the namenode enters the active state.

Because, you know, when I trigger the datanode's block report, the block 
report will occur twice. I think there is no need to increase the load on the 
namenode.


was (Author: zhengchenyu):
Yes, I can solve this problem by triggering a block report manually. What I 
mean is: is there any need to solve this problem by optimizing the logic? For 
example, make sure the new block report triggered by the namenode's heartbeat 
happens after the namenode enters the active state. 

> Huge PostponedMisreplicatedBlocks can't decrease immediately when start 
> namenode after datanode
> ---
>
> Key: HDFS-15589
> URL: https://issues.apache.org/jira/browse/HDFS-15589
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
> Environment: CentOS 7
>Reporter: zhengchenyu
>Priority: Major
>
> In our test cluster, I restarted my namenode. Then I found many 
> PostponedMisreplicatedBlocks which did not decrease immediately. 
> I searched the log and found entries like the ones below.
> {code:java}
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=c6a9934f-afd4-4437-b976-fed55173ce57, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=aee144f1-2082-4bca-a92b-f3c154a71c65, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=d152fa5b-1089-4bfc-b9c4-e3a7d98c7a7b, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,156 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=5cffc1fe-ace9-4af8-adfc-6002a7f5565d, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,161 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=9980d8e1-b0d9-4657-b97d-c803f82c1459, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,197 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=77ff3f5e-37f0-405f-a16c-166311546cae, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> {code}
> Note: the test cluster only has 6 datanodes.
> You will see the block report called before "Marking all datanodes as stale", 
> which is logged by startActiveServices. But 
> DatanodeStorageInfo.blockContentsStale is only set to false in a block 
> report, and startActiveServices then marks all datanodes as stale. So the 
> datanodes will stay stale until the next block report, and 
> PostponedMisreplicatedBlocks stays at a huge number.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled

2020-09-22 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-15590:
---
Reporter: Nilotpal Nandi  (was: Shashikant Banerjee)

> namenode fails to start when ordered snapshot deletion feature is disabled
> --
>
> Key: HDFS-15590
> URL: https://issues.apache.org/jira/browse/HDFS-15590
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 1. Enabled ordered deletion snapshot feature.
> 2. Created snapshottable directory - /user/hrt_6/atrr_dir1
> 3. Created snapshots s0, s1, s2.
> 4. Deleted snapshot s2
> 5. Delete snapshot s0, s1, s2 again
> 6. Disable ordered deletion snapshot feature
> 7. Restart Namenode
> Failed to start namenode.
> org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot delete snapshot s2 
> from path /user/hrt_6/atrr_dir2: the snapshot does not exist.
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:237)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:293)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:510)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:819)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737)
> {code}
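
For anyone reproducing the steps above, a sketch of the corresponding shell 
commands (assuming a running cluster and sufficient permissions; the path and 
snapshot names follow the repro):

{code}
hdfs dfsadmin -allowSnapshot /user/hrt_6/atrr_dir1   # make the directory snapshottable
hdfs dfs -createSnapshot /user/hrt_6/atrr_dir1 s0
hdfs dfs -createSnapshot /user/hrt_6/atrr_dir1 s1
hdfs dfs -createSnapshot /user/hrt_6/atrr_dir1 s2
hdfs dfs -deleteSnapshot /user/hrt_6/atrr_dir1 s2    # step 4
hdfs dfs -deleteSnapshot /user/hrt_6/atrr_dir1 s0    # step 5: delete the rest again
hdfs dfs -deleteSnapshot /user/hrt_6/atrr_dir1 s1
{code}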



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15590?focusedWorklogId=487965&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487965
 ]

ASF GitHub Bot logged work on HDFS-15590:
-

Author: ASF GitHub Bot
Created on: 22/Sep/20 06:17
Start Date: 22/Sep/20 06:17
Worklog Time Spent: 10m 
  Work Description: bshashikant opened a new pull request #2326:
URL: https://github.com/apache/hadoop/pull/2326


   please see https://issues.apache.org/jira/browse/HDFS-15590
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487965)
Remaining Estimate: 0h
Time Spent: 10m

> namenode fails to start when ordered snapshot deletion feature is disabled
> --
>
> Key: HDFS-15590
> URL: https://issues.apache.org/jira/browse/HDFS-15590
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 1. Enabled ordered deletion snapshot feature.
> 2. Created snapshottable directory - /user/hrt_6/atrr_dir1
> 3. Created snapshots s0, s1, s2.
> 4. Deleted snapshot s2
> 5. Delete snapshot s0, s1, s2 again
> 6. Disable ordered deletion snapshot feature
> 7. Restart Namenode
> Failed to start namenode.
> org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot delete snapshot s2 
> from path /user/hrt_6/atrr_dir2: the snapshot does not exist.
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:237)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:293)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:510)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:819)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15590:
--
Labels: pull-request-available  (was: )

> namenode fails to start when ordered snapshot deletion feature is disabled
> --
>
> Key: HDFS-15590
> URL: https://issues.apache.org/jira/browse/HDFS-15590
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 1. Enabled ordered deletion snapshot feature.
> 2. Created snapshottable directory - /user/hrt_6/atrr_dir1
> 3. Created snapshots s0, s1, s2.
> 4. Deleted snapshot s2
> 5. Delete snapshot s0, s1, s2 again
> 6. Disable ordered deletion snapshot feature
> 7. Restart Namenode
> Failed to start namenode.
> org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot delete snapshot s2 
> from path /user/hrt_6/atrr_dir2: the snapshot does not exist.
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:237)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:293)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:510)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:819)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737)
> {code}
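
To make the failure mode easier to see, here is a minimal, self-contained
model of the replay path. All class and method names below are hypothetical
illustrations, not the actual HDFS implementation: the point is only that the
repeated deletes in steps 4-5 were apparently accepted and written to the edit
log while ordered snapshot deletion was enabled, so with the feature disabled
the replayed OP_DELETE_SNAPSHOT for the already-removed s2 takes the strict
path and aborts startup.

{code:java}
import java.util.HashSet;
import java.util.Set;

public class SnapshotReplaySketch {

  /** Toy stand-in for the namenode's snapshot bookkeeping; not real HDFS code. */
  static class SnapshotManagerModel {
    private final Set<String> snapshots = new HashSet<>();

    void createSnapshot(String name) {
      snapshots.add(name);
    }

    // Strict semantics: deleting a snapshot that is not present throws, and
    // during edit-log replay an exception here aborts namenode startup.
    void deleteSnapshot(String name) {
      if (!snapshots.remove(name)) {
        throw new IllegalStateException(
            "Cannot delete snapshot " + name + ": the snapshot does not exist.");
      }
    }
  }

  public static void main(String[] args) {
    SnapshotManagerModel sm = new SnapshotManagerModel();
    sm.createSnapshot("s0");
    sm.createSnapshot("s1");
    sm.createSnapshot("s2");

    sm.deleteSnapshot("s2"); // step 4: s2 removed once
    sm.deleteSnapshot("s2"); // replayed duplicate delete -> throws, mirroring
                             // the SnapshotException seen at startup
  }
}
{code}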



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15589) Huge PostponedMisreplicatedBlocks can't decrease immediately when start namenode after datanode

2020-09-22 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17199847#comment-17199847
 ] 

zhengchenyu commented on HDFS-15589:


Yes, I can work around this problem by triggering a block report manually. My
question is whether we should fix it by optimizing the logic, for example by
making sure that the block report triggered by the namenode's heartbeat
response happens only after the namenode has entered the active state.

> Huge PostponedMisreplicatedBlocks can't decrease immediately when start 
> namenode after datanode
> ---
>
> Key: HDFS-15589
> URL: https://issues.apache.org/jira/browse/HDFS-15589
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
> Environment: CentOS 7
>Reporter: zhengchenyu
>Priority: Major
>
> In our test cluster, I restarted the namenode and found many
> PostponedMisreplicatedBlocks that did not decrease immediately.
> I searched the logs and found entries like this:
> {code:java}
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=c6a9934f-afd4-4437-b976-fed55173ce57, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=aee144f1-2082-4bca-a92b-f3c154a71c65, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,029 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=d152fa5b-1089-4bfc-b9c4-e3a7d98c7a7b, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,156 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=5cffc1fe-ace9-4af8-adfc-6002a7f5565d, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,161 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=9980d8e1-b0d9-4657-b97d-c803f82c1459, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> 2020-09-21 17:02:37,197 DEBUG BlockStateChange: *BLOCK* NameNode.blockReport: 
> from DatanodeRegistration(xx.xx.xx.xx:9866, 
> datanodeUuid=77ff3f5e-37f0-405f-a16c-166311546cae, infoPort=9864, 
> infoSecurePort=0, ipcPort=9867, 
> storageInfo=lv=-57;cid=CID-9f6d0a32-e51c-459a-9f65-6e7b5791ee25;nsid=1016509846;c=1592578350834),
>  reports.length=12
> {code}
> Note: the test cluster only has 6 datanodes.
> You can see that the block reports arrive before "Marking all datanodes as
> stale", which is logged by startActiveServices. But
> DatanodeStorageInfo.blockContentsStale is only set to false during a block
> report, and startActiveServices then marks every datanode as stale again. So
> the datanodes stay stale until the next block report, and
> PostponedMisreplicatedBlocks remains huge in the meantime.
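
To make the ordering concrete, here is a minimal, self-contained model of the
race. The class and method names are hypothetical illustrations, not the real
Hadoop classes:

{code:java}
public class StaleStorageRace {

  /** Toy stand-in for DatanodeStorageInfo's staleness flag. */
  static class StorageModel {
    boolean blockContentsStale = true;

    void onFullBlockReport()   { blockContentsStale = false; }
    void onFailoverMarkStale() { blockContentsStale = true; }

    boolean canRescanPostponedBlocks() { return !blockContentsStale; }
  }

  public static void main(String[] args) {
    StorageModel storage = new StorageModel();

    // Ordering observed in the DEBUG log above: the block report lands first,
    storage.onFullBlockReport();
    // and startActiveServices then marks every datanode stale again,
    storage.onFailoverMarkStale();

    // so postponed misreplicated blocks cannot be rescanned until the *next*
    // full block report arrives.
    System.out.println("can rescan = " + storage.canRescanPostponedBlocks());
  }
}
{code}

In the meantime, as discussed in the comment above, triggering a full block
report manually (hdfs dfsadmin -triggerBlockReport <datanode_host:ipc_port>)
should reset the flag without waiting for the next scheduled report.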



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org