[ https://issues.apache.org/jira/browse/HDFS-16146?focusedWorklogId=632302&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632302 ]
ASF GitHub Bot logged work on HDFS-16146:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 02/Aug/21 12:24
Start Date: 02/Aug/21 12:24
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #3247:
URL: https://github.com/apache/hadoop/pull/3247#issuecomment-890983486
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 40s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 11m 24s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 20s | | trunk passed |
| +1 :green_heart: | compile | 4m 54s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 4m 34s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 14s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 23s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 38s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 11s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 5m 35s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 22s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 15s | | the patch passed |
| +1 :green_heart: | compile | 4m 56s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 4m 56s | | the patch passed |
| +1 :green_heart: | compile | 4m 33s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 4m 33s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 5s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 6s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 23s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 54s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 5m 51s | | the patch passed |
| +1 :green_heart: | shadedclient | 16m 2s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 22s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 277m 10s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3247/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. |
| | | 390m 17s | | |
| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestDeadNodeDetection |
| | hadoop.hdfs.TestLeaseRecovery |
| | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
| | hadoop.hdfs.TestListFilesInDFS |
| | hadoop.hdfs.TestCrcCorruption |
| | hadoop.hdfs.TestViewDistributedFileSystem |
| | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
| | hadoop.hdfs.TestFileCreationDelete |
| | hadoop.hdfs.client.impl.TestBlockReaderLocal |
| | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
| | hadoop.hdfs.TestHFlush |
| | hadoop.hdfs.TestClientReportBadBlock |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
| | hadoop.hdfs.TestDecommissionWithStriped |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3247/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3247 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 818d1f1400d7 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 6f1ad05cd9cd5738f527fd58573046adf1ca2f2a |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3247/3/testReport/ |
| Max. process+thread count | 3006 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3247/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
Issue Time Tracking
-------------------
Worklog Id: (was: 632302)
Time Spent: 1h 10m (was: 1h)
> All three replicas are lost due to not adding a new DataNode in time
> --------------------------------------------------------------------
>
> Key: HDFS-16146
> URL: https://issues.apache.org/jira/browse/HDFS-16146
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode, hdfs
> Reporter: Shuyan Zhang
> Assignee: Shuyan Zhang
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h 10m
> Remaining Estimate: 0h
>
> We have a three-replica file, and all replicas of one of its blocks were
> lost while the default datanode replacement policy was in use. It happened
> like this:
> 1. addBlock() applies for a new block and successfully connects three
> datanodes (dn1, dn2, and dn3) to build a pipeline;
> 2. data is written through the pipeline;
> 3. dn1 fails and is kicked out. More than one datanode still remains in the
> pipeline, so according to the replacement policy there is no need to add a
> new datanode (see the sketch after this list);
> 4. writing completes and the pipeline enters PIPELINE_CLOSE;
> 5. dn2 fails and is kicked out. Because the pipeline is already in the
> close phase, addDatanode2ExistingPipeline() decides to hand the task of
> transferring the replica over to the NameNode. At this point only one
> datanode (dn3) is left in the pipeline;
> 6. dn3 fails, and all replicas are lost.
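> To make the policy decision concrete, here is a minimal, self-contained
> Java sketch of the behavior in steps 3 and 5. It is an illustrative model
> only, not Hadoop's actual DataStreamer code; the class name, method name,
> and the simplified DEFAULT-policy condition are assumptions made for the
> example:
> {code:java}
> // Illustrative model (NOT the real client code) of whether a replacement
> // datanode is added after a pipeline failure.
> public class ReplaceDatanodeSketch {
>
>   enum Stage { DATA_STREAMING, PIPELINE_CLOSE }
>
>   // Simplified DEFAULT policy: for replication >= 3, replace a failed node
>   // only when half or fewer of the replicas survive. The PIPELINE_CLOSE
>   // branch models addDatanode2ExistingPipeline() deferring re-replication
>   // to the NameNode instead of adding a datanode.
>   static boolean shouldAddReplacement(Stage stage, int replication,
>                                       int surviving) {
>     if (stage == Stage.PIPELINE_CLOSE) {
>       return false; // step 5: replacement is skipped while closing
>     }
>     return replication >= 3 && surviving <= replication / 2;
>   }
>
>   public static void main(String[] args) {
>     // Step 3: dn1 fails while streaming; 2 of 3 nodes survive.
>     System.out.println(shouldAddReplacement(Stage.DATA_STREAMING, 3, 2)); // false
>     // Step 5: dn2 fails during PIPELINE_CLOSE; only dn3 survives, yet the
>     // close-stage branch still skips the replacement.
>     System.out.println(shouldAddReplacement(Stage.PIPELINE_CLOSE, 3, 1)); // false
>   }
> }
> {code}
> Removing (or policy-gating) the PIPELINE_CLOSE branch above would let the
> step 5 failure trigger a replacement, which is the change argued for below.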
> If a new datanode were added in step 5, losing all replicas could have been
> avoided in this case. A failure during PIPELINE_CLOSE carries the same risk
> of losing replicas as a failure during DATA_STREAMING, so we should not
> skip adding a new datanode during PIPELINE_CLOSE.
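> For reference, the replacement policy itself is tunable on the client side;
> the configuration keys in the sketch below are the existing HDFS client
> settings, while the enclosing class and file path are hypothetical. Note
> that the close-phase hand-off described above currently happens regardless
> of this policy, which is exactly what this issue proposes to change:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> // Usage sketch: tighten the client-side datanode replacement policy.
> // ALWAYS replaces any failed pipeline datanode while streaming; it does
> // not currently change the PIPELINE_CLOSE behavior reported here.
> public class StrictReplacementExample {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     conf.setBoolean(
>         "dfs.client.block.write.replace-datanode-on-failure.enable", true);
>     conf.set(
>         "dfs.client.block.write.replace-datanode-on-failure.policy", "ALWAYS");
>     // best-effort=false: fail the write rather than continue on a shrunken
>     // pipeline when no replacement datanode can be found.
>     conf.setBoolean(
>         "dfs.client.block.write.replace-datanode-on-failure.best-effort", false);
>
>     try (FileSystem fs = FileSystem.get(conf)) {
>       // Streams created from this FileSystem now use the stricter policy.
>       fs.create(new Path("/tmp/example")).close();
>     }
>   }
> }
> {code}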