[ https://issues.apache.org/jira/browse/HDFS-4660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14543088#comment-14543088 ]
Hadoop QA commented on HDFS-4660:
---------------------------------
\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 14m 35s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| {color:green}+1{color} | javac | 7m 29s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 39s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 22s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle | 2m 14s | The applied patch generated 4 new checkstyle issues (total was 62, now 63). |
| {color:red}-1{color} | whitespace | 0m 0s | The patch has 6 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install | 1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 35s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 3m 2s | The patch does not introduce any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native | 3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 168m 19s | Tests failed in hadoop-hdfs. |
| | | 211m 5s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12732698/HDFS-4660.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 281d47a |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/10961/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/10961/artifact/patchprocess/whitespace.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/10961/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/10961/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/10961/console |
This message was automatically generated.
> Duplicated checksum on DN in a recovered pipeline
> -------------------------------------------------
>
> Key: HDFS-4660
> URL: https://issues.apache.org/jira/browse/HDFS-4660
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 3.0.0, 2.0.3-alpha
> Reporter: Peng Zhang
> Assignee: Kihwal Lee
> Priority: Critical
> Attachments: HDFS-4660.patch, HDFS-4660.patch
>
>
> Initial pipeline: DN1 DN2 DN3
> DN2 is stopped, and pipeline recovery adds DN4 at the 2nd position:
> DN1 DN4 DN3
> The RBW replicas are then recovered.
> DN4 after RBW recovery:
> 2013-04-01 21:02:31,570 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover
> RBW replica
> BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1004
> 2013-04-01 21:02:31,570 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
> Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
> getNumBytes() = 134144
> getBytesOnDisk() = 134144
> getVisibleLength()= 134144
> i.e. the replica ends exactly on a chunk boundary: 134144/512 = 262 full
> chunks.
> DN3 after RBW recovery:
> 2013-04-01 21:02:31,575 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recover
> RBW replica
> BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1004
> 2013-04-01 21:02:31,575 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
> Recovering ReplicaBeingWritten, blk_-9076133543772600337_1004, RBW
> getNumBytes() = 134028
> getBytesOnDisk() = 134028
> getVisibleLength()= 134028
> The client then sends a packet after pipeline recovery:
> offset=133632 len=1008
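> (Spelling out the overlap: the resent packet covers bytes 133632..134640,
> and 133632 = 261*512, so it begins exactly on the boundary of chunk 262.
> DN4 already has 134144 = 262*512 bytes on disk, i.e. chunk 262 complete
> with its checksum, so the first 512 data bytes of the packet are already
> written there; DN3 has only 134028 = 261*512 + 396 bytes, so its overlap
> is just a partial chunk.)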
> DN4 after flush
> 2013-04-01 21:02:31,779 DEBUG
> org.apache.hadoop.hdfs.server.datanode.DataNode: FlushOrsync, file
> offset:134640; meta offset:1063
> // meta end position should be ceil(134640/512)*4 + 7 == 263*4 + 7 == 1059,
> // but now it is 1063.
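> (The meta file is a 7-byte header followed by one 4-byte CRC per 512-byte
> chunk; 134640 bytes of data is 262 full chunks plus one partial, so the
> meta file should end at 7 + 263*4 = 1059. The observed 1063 is exactly one
> 4-byte checksum too long, i.e. one duplicated CRC.)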
> DN3 after flush
> 2013-04-01 21:02:31,782 DEBUG
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
> BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1005,
> type=LAST_IN_PIPELINE, downstreams=0:[]: enqueue Packet(seqno=219,
> lastPacketInBlock=false, offsetInBlock=134640,
> ackEnqueueNanoTime=8817026136871545)
> 2013-04-01 21:02:31,782 DEBUG
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Changing
> meta file offset of block
> BP-325305253-10.2.201.14-1364820083462:blk_-9076133543772600337_1005 from
> 1055 to 1051
> 2013-04-01 21:02:31,782 DEBUG
> org.apache.hadoop.hdfs.server.datanode.DataNode: FlushOrsync, file
> offset:134640; meta offset:1059
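> (On DN3 the partial-chunk case works as intended: 134028 bytes is 261 full
> chunks plus a partial one, putting the meta file at 7 + 262*4 = 1055;
> presumably adjustCrcFilePosition rolls it back to 7 + 261*4 = 1051 so the
> partial chunk's CRC can be rewritten, and the flush ends at the expected
> 7 + 263*4 = 1059.)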
> After checking the meta file on DN4, I found that the checksum of chunk 262
> is duplicated, but the data is not.
> Later, after the block was finalized, DN4's block scanner detected the bad
> block and reported it to the NN. The NN then sent a command to delete this
> replica and re-replicate the block from another DN in the pipeline to
> satisfy the replication factor.
> I think this is because BlockReceiver skips data bytes that were already
> written, but does not skip the corresponding checksum bytes. And the
> function adjustCrcFilePosition is only used for the last non-completed
> chunk, not for this situation.
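> A minimal sketch of the adjustment this implies, with hypothetical helper
> names rather than the actual BlockReceiver code: when a packet overlaps
> data already on disk, the checksum stream should be advanced past the CRCs
> of the already-written full chunks, just as the data stream is advanced
> past the bytes themselves.
> {code:java}
> // Hypothetical sketch, not the actual BlockReceiver implementation.
> // Constants match this cluster's configuration as seen in the logs.
> static final int BYTES_PER_CHUNK = 512; // bytesPerChecksum
> static final int CHECKSUM_SIZE = 4;     // a CRC32 checksum is 4 bytes
>
> /** Data bytes at the front of the packet that are already on disk. */
> static long dataBytesToSkip(long onDiskLen, long packetOffsetInBlock) {
>   return Math.max(0, onDiskLen - packetOffsetInBlock);
> }
>
> /** Checksum bytes covering those data bytes; only full chunks already
>  *  have a final CRC on disk, so round down to a chunk boundary. */
> static long checksumBytesToSkip(long onDiskLen, long packetOffsetInBlock) {
>   long skippedData = dataBytesToSkip(onDiskLen, packetOffsetInBlock);
>   return (skippedData / BYTES_PER_CHUNK) * CHECKSUM_SIZE;
> }
>
> // With the DN4 numbers above: dataBytesToSkip(134144, 133632) == 512,
> // so checksumBytesToSkip == 4, exactly the extra CRC seen in the meta
> // file (1063 instead of 1059).
> {code}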
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)