[ https://issues.apache.org/jira/browse/HDFS-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664635#comment-16664635 ]

Hadoop QA commented on HDFS-14027:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 14s{color} | {color:red} hadoop-hdfs-project generated 2 new + 534 unchanged - 2 fixed = 536 total (was 536) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 53s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 36s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}157m 22s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14027 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12945675/HDFS-14027.02.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 4af619180421 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 38a65e3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| javac | https://builds.apache.org/job/PreCommit-HDFS-Build/25367/artifact/out/diff-compile-javac-hadoop-hdfs-project.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/25367/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/25367/testReport/ |
| Max. process+thread count | 4340 (vs. ulimit of 10000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/25367/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> DFSStripedOutputStream should implement both hsync methods
> ----------------------------------------------------------
>
>                 Key: HDFS-14027
>                 URL: https://issues.apache.org/jira/browse/HDFS-14027
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: erasure-coding
>    Affects Versions: 3.0.0
>            Reporter: Xiao Chen
>            Assignee: Xiao Chen
>            Priority: Critical
>         Attachments: HDFS-14027.01.patch, HDFS-14027.02.patch
>
>
> In an internal Spark investigation, it appears that when 
> [EventLoggingListener|https://github.com/apache/spark/blob/7251be0c04f0380208e0197e559158a9e1400868/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L152-L155]
>  writes to an EC file, readers may hit exceptions or see odd output. A sample 
> exception is:
> {noformat}
> hdfs dfs -cat /user/spark/applicationHistory/application_1540333573846_0003 | head -1
> 18/10/23 18:12:39 WARN impl.BlockReaderFactory: I/O error constructing remote block reader.
> java.io.IOException: Got error, status=ERROR, status message opReadBlock BP-1488936467-HOST_IP-1540333392519:blk_-9223372036854774960_1085 received exception java.io.IOException:  Offset 0 and length 116161 don't match block BP-1488936467-HOST_IP-1540333392519:blk_-9223372036854774960_1085 ( blockLen 110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for file /user/spark/applicationHistory/application_1540333573846_0003, for pool BP-1488936467-HOST_IP-1540333392519 block -9223372036854774960_1085
>       at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
>       at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
>       at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:440)
>       at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:408)
>       at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:848)
>       at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:744)
>       at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
>       at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
>       at org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:264)
>       at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:299)
>       at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:330)
>       at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:326)
>       at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:419)
>       at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
>       at java.io.DataInputStream.read(DataInputStream.java:100)
>       at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:92)
>       at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
>       at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
>       at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
>       at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
>       at org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>       at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>       at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>       at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>       at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>       at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>       at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
>       at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>       at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> 18/10/23 18:12:39 WARN hdfs.DFSClient: Failed to connect to /HOST2_IP:20002 for blockBP-1488936467-HOST_IP-1540333392519:blk_-9223372036854774960_1085
> java.io.IOException: Got error, status=ERROR, status message opReadBlock BP-1488936467-HOST_IP-1540333392519:blk_-9223372036854774960_1085 received exception java.io.IOException:  Offset 0 and length 116161 don't match block BP-1488936467-HOST_IP-1540333392519:blk_-9223372036854774960_1085 ( blockLen 110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for file /user/spark/applicationHistory/application_1540333573846_0003, for pool BP-1488936467-HOST_IP-1540333392519 block -9223372036854774960_1085
>       at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
>       at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
>       at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:440)
>       at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:408)
>       at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:848)
>       at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:744)
>       at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
>       at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
>       at org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:264)
>       at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:299)
>       at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:330)
>       at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:326)
>       at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:419)
>       at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
>       at java.io.DataInputStream.read(DataInputStream.java:100)
>       at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:92)
>       at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
>       at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
>       at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
>       at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
>       at org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>       at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>       at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>       at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>       at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>       at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>       at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
>       at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>       at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> 18/10/23 18:12:39 WARN hdfs.DFSClient: [DatanodeInfoWithStorage[HOST2_IP:20002,DS-f5bc0566-eeb0-43aa-84b9-551a3a6d01a6,DISK]] are unavailable and all striping blocks on them are lost. IgnoredNodes = null
> {"Event":"SparkListenerLogStart","Spark Version":"2.4.0-cdh6.x-SNAPSHOT"}
> {noformat}
> Also, there are clearly {{fsync}} logs in the NN for the file.
> Looking at the code, the only way this can happen is through the {{hsync}} overload on {{DFSStripedOutputStream}} that takes {{SyncFlag}} parameters. We should make that overload consistent with the parameterless {{hsync}}; a sketch follows below. It seems this was simply missed in the day-one implementation in HDFS-7889.
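
A minimal sketch of the fix direction described above, assuming (as the description implies) that the parameterless {{hsync}} on {{DFSStripedOutputStream}} is intentionally a no-op and that the {{EnumSet<SyncFlag>}} overload otherwise falls through to {{DFSOutputStream}}. This is an illustration only, not the attached patch:

{code:java}
import java.util.EnumSet;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;

// Inside DFSStripedOutputStream: implement BOTH hsync variants so they
// behave identically.
@Override
public void hsync() {
  // Striped (EC) output streams do not support hsync; deliberately a no-op.
}

@Override
public void hsync(EnumSet<SyncFlag> syncFlags) {
  // Must mirror hsync(). Inheriting DFSOutputStream.hsync(EnumSet) would
  // let the NameNode record a block length that the striped data on the
  // DataNodes cannot satisfy, producing the "Offset and length don't match
  // block" read failures quoted above.
}
{code}

Callers that need real durability guarantees can probe the stream first, e.g. {{out.hasCapability(StreamCapabilities.HSYNC)}}, instead of assuming {{hsync}} persists data on an erasure-coded file.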


