[jira] [Moved] (HDFS-15694) Avoid calling UpdateHeartBeatState inside DataNodeDescriptor

2020-11-23 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein moved HADOOP-17393 to HDFS-15694:
---

Key: HDFS-15694  (was: HADOOP-17393)
Project: Hadoop HDFS  (was: Hadoop Common)

> Avoid calling UpdateHeartBeatState inside DataNodeDescriptor
> 
>
> Key: HDFS-15694
> URL: https://issues.apache.org/jira/browse/HDFS-15694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [~kshukla] reported that the {{DataNodeDescriptor}} constructor calls 
> {{updateHeartBeat}}, which spams the NN logs. The call does not update much, 
> since all of the fields passed to it are null or 0.
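> A minimal illustration of the idea, assuming a simplified, hypothetical class 
> rather than the actual Hadoop code: set quiet defaults in the constructor and 
> reserve the logging update path for real heartbeats.
> {code:java}
> // Hypothetical sketch only; the real DatanodeDescriptor/heartbeat types differ.
> class HeartbeatStateSketch {
>   private long capacity;
>   private long lastUpdateMonotonic;
>
>   // The constructor sets defaults directly; nothing is logged here.
>   HeartbeatStateSketch() {
>     this.capacity = 0L;
>     this.lastUpdateMonotonic = 0L;
>   }
>
>   // Called only for real heartbeats, where a log line is meaningful.
>   void updateHeartbeatState(long capacity, long nowMonotonic) {
>     this.capacity = capacity;
>     this.lastUpdateMonotonic = nowMonotonic;
>     // LOG.info("Updated heartbeat state for ...") would live here in real code.
>   }
> }
> {code}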



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?focusedWorklogId=515901&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-515901
 ]

ASF GitHub Bot logged work on HDFS-15689:
-

Author: ASF GitHub Bot
Created on: 24/Nov/20 05:07
Start Date: 24/Nov/20 05:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2472:
URL: https://github.com/apache/hadoop/pull/2472#issuecomment-732656039


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   2m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 53s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   5m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   2m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 22s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m  6s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   4m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 53s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   3m 53s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 58s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 50s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 15s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 116m 23s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2472/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 253m 20s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestMultipleNNPortQOP |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2472/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2472 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b27cb725bf62 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9b4faf2b51a |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-

[jira] [Created] (HDFS-15693) Improve native code's performance when writing to HDFS

2020-11-23 Thread Jira
István Fajth created HDFS-15693:
---

 Summary: Improve native code's performance when writing to HDFS
 Key: HDFS-15693
 URL: https://issues.apache.org/jira/browse/HDFS-15693
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fuse-dfs, native
Reporter: István Fajth


For reads, we introduced direct buffers in order to communicate more 
efficiently between the JVM and the native code, and readDirect and 
pReadDirect are implemented in hdfs.c.

Writes, on the other hand, still use the putByteArrayRegion call, which 
results in a copy of the buffer in memory.

This Jira is to explore what has to be done in order to start using direct 
buffers.
A short initial list I see at the moment:
- add a new StreamCapability for streams that want to support writes via a 
direct buffer
- implement this capability in DFSOutputStream and DFSStripedOutputStream
- implement a writeDirect method on the native side

fuse_dfs can benefit from this.
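
As a point of reference, here is a minimal JDK-only sketch of why a direct 
buffer helps: a direct ByteBuffer can be handed to a native-facing API without 
the extra heap-array copy. The writeDirect/StreamCapability pieces listed above 
do not exist yet, so a plain FileChannel stands in for the eventual DFS output 
path.
{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class DirectWriteSketch {
  public static void main(String[] args) throws IOException {
    // A direct buffer lives outside the Java heap, so native code can address
    // it without the copy that a byte[]-based JNI call forces.
    ByteBuffer direct = ByteBuffer.allocateDirect(4 * 1024 * 1024);
    direct.put("hello, direct buffers".getBytes(StandardCharsets.UTF_8));
    direct.flip();

    // FileChannel is only a stand-in sink; the proposal is a writeDirect path
    // in DFSOutputStream that would accept such a buffer the same way.
    try (FileChannel ch = FileChannel.open(Paths.get("/tmp/direct-write-demo"),
        StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
      while (direct.hasRemaining()) {
        ch.write(direct);
      }
    }
  }
}
{code}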



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15692) Improve fuse_dfs read performace

2020-11-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-15692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

István Fajth updated HDFS-15692:

Summary: Improve fuse_dfs read performace  (was: Improve furse_dfs read 
performace)

> Improve fuse_dfs read performace
> 
>
> Key: HDFS-15692
> URL: https://issues.apache.org/jira/browse/HDFS-15692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fuse-dfs
>Reporter: István Fajth
>Priority: Major
>
> Currently fuse_dfs uses a prefetch buffer to read from HDFS via libhdfs' 
> pread method.
> The algorithm inside fuse_read.c, in short, does the following:
>  if the rdbuffer size is less than the buffer provided
>  then
>    read directly into the supplied buffer
>  else
>    grab lock
>    if the prefetch buffer does not have more data
>    then
>      fill the prefetch buffer
>    endif
>    fill the supplied buffer via memcpy from the prefetch buffer
>    release lock
>  endif
> It would be nice to have a background thread and double prefetch buffers, so 
> that while one buffer serves the reads coming from the local client, the other 
> can prefetch the data. That would improve the read speed, especially with 
> EC-encoded files.
> According to some measurements I did, increasing the read buffer changes the 
> runtime significantly: with 64MB the runtime gets much closer to HDFS. 
> Interestingly, 128MB as the buffer size does not perform well, but 256MB gets 
> even closer to what the DFS client can provide (16 vs 18 seconds with rep3 
> files, and on par with the DFS client for EC-encoded files).
> So it seems worth streaming a larger chunk of data continuously, at least 
> with pread; and with a separate fetching thread and double buffering, we do 
> not even need positioned reads, just continuous streaming of data with read.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15692) Improve furse_dfs read performace

2020-11-23 Thread Jira
István Fajth created HDFS-15692:
---

 Summary: Improve furse_dfs read performace
 Key: HDFS-15692
 URL: https://issues.apache.org/jira/browse/HDFS-15692
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fuse-dfs
Reporter: István Fajth


Currently fuse_dfs uses a prefetch buffer to read from HDFS via libhdfs' pread 
method.

The algorithm inside fuse_read.c, in short, does the following:
 if the rdbuffer size is less than the buffer provided
 then
   read directly into the supplied buffer
 else
   grab lock
   if the prefetch buffer does not have more data
   then
     fill the prefetch buffer
   endif
   fill the supplied buffer via memcpy from the prefetch buffer
   release lock
 endif

It would be nice to have a background thread and double prefetch buffers, so 
that while one buffer serves the reads coming from the local client, the other 
can prefetch the data. That would improve the read speed, especially with 
EC-encoded files.

According to some measurements I did, increasing the read buffer changes the 
runtime significantly: with 64MB the runtime gets much closer to HDFS. 
Interestingly, 128MB as the buffer size does not perform well, but 256MB gets 
even closer to what the DFS client can provide (16 vs 18 seconds with rep3 
files, and on par with the DFS client for EC-encoded files).

So it seems worth streaming a larger chunk of data continuously, at least with 
pread; and with a separate fetching thread and double buffering, we do not 
even need positioned reads, just continuous streaming of data with read.
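
A rough, self-contained Java sketch of the double-buffering idea (the real 
change would live in the fuse_dfs C code; the class and thread names here are 
made up): a background thread keeps filling buffers ahead of the reader, so 
the reader almost always finds data waiting.
{code:java}
import java.io.InputStream;
import java.util.Arrays;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Sketch only: a bounded queue of two buffers models the double buffering. */
public class DoubleBufferPrefetchSketch implements AutoCloseable {
  private static final byte[] EOF = new byte[0];
  private final BlockingQueue<byte[]> ready = new ArrayBlockingQueue<>(2);
  private final Thread fetcher;

  public DoubleBufferPrefetchSketch(InputStream src, int bufSize) {
    fetcher = new Thread(() -> {
      try {
        while (!Thread.currentThread().isInterrupted()) {
          byte[] buf = new byte[bufSize];
          int n = src.read(buf, 0, bufSize); // plain sequential read, no pread
          if (n <= 0) {
            ready.put(EOF);                  // signal end of stream
            return;
          }
          ready.put(Arrays.copyOf(buf, n));  // blocks while both slots are full
        }
      } catch (Exception e) {
        Thread.currentThread().interrupt();
      }
    }, "prefetch-thread");
    fetcher.setDaemon(true);
    fetcher.start();
  }

  /** Returns the next filled buffer, or an empty array at end of stream. */
  public byte[] next() throws InterruptedException {
    return ready.take();
  }

  @Override
  public void close() {
    fetcher.interrupt();
  }
}
{code}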



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15296) TestBPOfferService#testMissBlocksWhenReregister is flaky

2020-11-23 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki resolved HDFS-15296.
-
Resolution: Duplicate

I'm closing this as a duplicate of HDFS-15654.

> TestBPOfferService#testMissBlocksWhenReregister is flaky
> 
>
> Key: HDFS-15296
> URL: https://issues.apache.org/jira/browse/HDFS-15296
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 3.4.0
>Reporter: Mingliang Liu
>Priority: Major
>
> TestBPOfferService.testMissBlocksWhenReregister fails intermittently in the 
> {{trunk}} branch; not sure about other branches. Example failures are:
> - 
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1964/4/testReport/org.apache.hadoop.hdfs.server.datanode/TestBPOfferService/testMissBlocksWhenReregister/
> - 
> https://builds.apache.org/job/PreCommit-HDFS-Build/29175/testReport/org.apache.hadoop.hdfs.server.datanode/TestBPOfferService/testMissBlocksWhenReregister/
> Sample exception stack is:
> {quote}
> Stacktrace
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestBPOfferService.testMissBlocksWhenReregister(TestBPOfferService.java:350)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15691) Fix flaky test TestServerWebApp.getHomeDir

2020-11-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15691?focusedWorklogId=515859&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-515859
 ]

ASF GitHub Bot logged work on HDFS-15691:
-

Author: ASF GitHub Bot
Created on: 24/Nov/20 01:34
Start Date: 24/Nov/20 01:34
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on a change in pull request #2482:
URL: https://github.com/apache/hadoop/pull/2482#discussion_r529110847



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/servlet/TestServerWebApp.java
##
@@ -43,6 +43,8 @@ public void getHomeDir() {
 assertEquals(ServerWebApp.getDir("TestServerWebApp0", ".log.dir", 
"/tmp/log"), "/tmp/log");
 System.setProperty("TestServerWebApp0.log.dir", "/tmplog");
 assertEquals(ServerWebApp.getDir("TestServerWebApp0", ".log.dir", 
"/tmp/log"), "/tmplog");
+System.clearProperty("TestServerWebApp0.home.dir");
+System.clearProperty("TestServerWebApp0.log.dir");

Review comment:
   @lzx404243 Nice catch!
   
   If the test fails at L42, L43, or L45, `System.clearProperty` is not 
executed and the properties are not cleared. The properties should be cleared 
in an `@After` method, or at the very start of the test.
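
   A minimal sketch of that suggestion, assuming a hypothetical JUnit 4 test 
class (not the actual patch): the `@After` method runs even when an assertion 
in the test body fails, so the properties cannot leak into later tests.
   ```java
   import org.junit.After;
   import org.junit.Test;

   public class TestServerWebAppCleanupSketch {

     @After
     public void clearTestProperties() {
       // Runs after every test, pass or fail, so no shared state leaks.
       System.clearProperty("TestServerWebApp0.home.dir");
       System.clearProperty("TestServerWebApp0.log.dir");
     }

     @Test
     public void getHomeDir() {
       System.setProperty("TestServerWebApp0.log.dir", "/tmplog");
       // ... assertions on ServerWebApp.getDir(...) would go here ...
     }
   }
   ```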





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 515859)
Time Spent: 20m  (was: 10m)

> Fix flaky test TestServerWebApp.getHomeDir
> --
>
> Key: HDFS-15691
> URL: https://issues.apache.org/jira/browse/HDFS-15691
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: macOS 10.15.6
> java version "1.8.0_151"
> Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
> Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
>  
>Reporter: Zhengxi Li
>Assignee: Zhengxi Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDFS-15691.001.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The test {{org.apache.hadoop.lib.servlet.TestServerWebApp.getHomeDir}} is 
> not idempotent and fails if run twice in the same JVM, because it pollutes 
> state shared among tests. It would be good to clean up this state pollution 
> so that other tests do not fail in the future due to the shared state 
> polluted by this test.
>  
> PR link: https://github.com/apache/hadoop/pull/2482



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?focusedWorklogId=515849&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-515849
 ]

ASF GitHub Bot logged work on HDFS-15689:
-

Author: ASF GitHub Bot
Created on: 24/Nov/20 00:48
Start Date: 24/Nov/20 00:48
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #2472:
URL: https://github.com/apache/hadoop/pull/2472#issuecomment-732511074


   @bshashikant  Would you take another look? The new revision doesn't throw 
when the trash exists and has the correct permission.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 515849)
Time Spent: 1.5h  (was: 1h 20m)

> allow/disallowSnapshot on EZ roots shouldn't fail due to trash 
> provisioning/emptiness check
> ---
>
> Key: HDFS-15689
> URL: https://issues.apache.org/jira/browse/HDFS-15689
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> h2. Background
> 1. HDFS-15607 added a feature where, when 
> {{dfs.namenode.snapshot.trashroot.enabled=true}}, allowSnapshot automatically 
> creates a .Trash directory immediately after the allowSnapshot operation, so 
> that deleted files are moved into the trash root inside the snapshottable 
> directory.
> 2. HDFS-15539 prevents admins from disallowing snapshots if the trash root 
> inside is not empty.
> h2. Problem
> 1. When {{dfs.namenode.snapshot.trashroot.enabled=true}}, if the directory 
> (to be allowed snapshot on) is an EZ root, allowSnapshot currently throws 
> {{FileAlreadyExistsException}} because the trash root already exists 
> (the encryption zone has already created an internal trash root).
> 2. Similarly, if we disallow snapshot on an EZ root, it may currently 
> complain that the trash root is not empty (or delete it if empty, which is 
> not desired since the EZ will still need it).
> h2. Solution
> 1. Let allowSnapshot succeed by not throwing {{FileAlreadyExistsException}}, 
> but inform the admin that the trash already exists.
> 2. Skip the {{checkTrashRootAndRemoveIfEmpty()}} check if the path is an EZ 
> root, as sketched below.
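>
> A hedged sketch of the two guards, written against a hypothetical SnapshotDir 
> abstraction rather than the real FSNamesystem types:
> {code:java}
> // Hypothetical types; only the shape of the two checks is meant to match.
> interface SnapshotDir {
>   boolean trashRootExists();
>   boolean isEncryptionZoneRoot();
>   void createTrashRoot();
>   void checkTrashRootAndRemoveIfEmpty();
> }
>
> class SnapshotTrashGuards {
>   /** allowSnapshot path: do not throw just because the trash root is there. */
>   static void provisionTrashOnAllow(SnapshotDir dir) {
>     if (dir.trashRootExists()) {
>       // An EZ root already carries an internal trash root; inform, don't fail.
>       return;
>     }
>     dir.createTrashRoot();
>   }
>
>   /** disallowSnapshot path: leave the EZ trash root alone. */
>   static void checkTrashOnDisallow(SnapshotDir dir) {
>     if (dir.isEncryptionZoneRoot()) {
>       return; // the EZ still needs its trash root; skip the emptiness check
>     }
>     dir.checkTrashRootAndRemoveIfEmpty();
>   }
> }
> {code}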



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?focusedWorklogId=515830&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-515830
 ]

ASF GitHub Bot logged work on HDFS-15689:
-

Author: ASF GitHub Bot
Created on: 24/Nov/20 00:09
Start Date: 24/Nov/20 00:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2472:
URL: https://github.com/apache/hadoop/pull/2472#issuecomment-732498290


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  38m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 46s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   4m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 11s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 41s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   4m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   3m 56s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   6m 29s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 21s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 132m  3s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2472/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 298m 16s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.TestDecommissionWithBackoffMonitor |
   |   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2472/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2472 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 59f8b5d87dda 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 07b

[jira] [Work logged] (HDFS-15689) allow/disallowSnapshot on EZ roots shouldn't fail due to trash provisioning/emptiness check

2020-11-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15689?focusedWorklogId=515614&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-515614
 ]

ASF GitHub Bot logged work on HDFS-15689:
-

Author: ASF GitHub Bot
Created on: 23/Nov/20 15:31
Start Date: 23/Nov/20 15:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2472:
URL: https://github.com/apache/hadoop/pull/2472#issuecomment-732234993


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 11s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 47s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   4m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 11s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 40s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   4m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 59s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2472/3/artifact/out/whitespace-eol.txt)
 |  The patch has 1 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  16m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 50s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 14s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 116m  7s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2472/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 240m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDistributedFileSystem |
   |   | hadoop.hdfs.TestViewDistributedFileSystem |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2472/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2472 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux beba14b988fb 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build 

[jira] [Work logged] (HDFS-14904) Option to let Balancer prefer top used nodes in each iteration

2020-11-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14904?focusedWorklogId=515442&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-515442
 ]

ASF GitHub Bot logged work on HDFS-14904:
-

Author: ASF GitHub Bot
Created on: 23/Nov/20 09:42
Start Date: 23/Nov/20 09:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2483:
URL: https://github.com/apache/hadoop/pull/2483#issuecomment-732045016


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 13s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 11s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 15s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 112m 43s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2483/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 209m 56s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2483/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2483 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d25910e75893 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 641d8856d20 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2483/2/testReport/ |
   | Max. process+thread count | 2936 (vs. ulimit of 5500)

[jira] [Commented] (HDFS-15655) Add option to make balancer prefer to get cold blocks

2020-11-23 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17237226#comment-17237226
 ] 

Yang Yun commented on HDFS-15655:
-

[~hexiaoqiao], I cannot get the detailed exception because this is an old 
issue in our customer's HBase setup. They often handle many big temporary 
files whose block location information is cached. If the balancer is running, 
the availability of the services is severely reduced, so they have not dared 
to use the balancer for a long time. After this fix, the balancer can run 
again without impacting availability.

Thanks [~ayushtkn] for your explanation.

> Add option to make balancer prefer to get cold blocks
> -
>
> Key: HDFS-15655
> URL: https://issues.apache.org/jira/browse/HDFS-15655
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15655.001.patch, HDFS-15655.002.patch, 
> HDFS-15655.003.patch, HDFS-15655.004.patch
>
>
> We met two issues when using the balancer.
>  # Moving hot files may cause DFSClient reads to fail.
>  # Some blocks of temporary files are moved, even though those files are 
> deleted soon afterwards.
> Add a config key "dfs.namenode.hot.block.time.interval"; the balancer then 
> prefers to get blocks that belong to cold files created before this time 
> interval.
> Also add an option "-hotBlockTimeInterval" to the balancer command line for 
> setting this value.
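>
> A small sketch of how the proposed knob would presumably be used once the 
> patch lands (the key and flag names come from the description above; the 
> value and its unit here are assumptions):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
>
> public class ColdBlockBalancerConfigSketch {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration();
>     // Only blocks belonging to files older than this interval would be
>     // considered cold and eligible for the balancer to move.
>     conf.set("dfs.namenode.hot.block.time.interval", "3600000");
>     System.out.println(conf.get("dfs.namenode.hot.block.time.interval"));
>     // Proposed command-line equivalent:
>     //   hdfs balancer -hotBlockTimeInterval 3600000
>   }
> }
> {code}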



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org