[
https://issues.apache.org/jira/browse/HDFS-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14638352#comment-14638352
]
Akira AJISAKA commented on HDFS-8812:
-------------------------------------
I reproduced the error on an AWS m3.xlarge instance, but I couldn't reproduce
it on my MacBook Air.
{code}
try {
  byte[] buf = new byte[1024 * 1024];
  peer.getOutputStream().write(buf);
  Assert.fail("write should timeout");
} catch (SocketTimeoutException ste) {
  // expected
}
{code}
It seems that {{peer.getOutputStream().write(buf)}} completed before the
configured timeout (1 second) was reached, so no {{SocketTimeoutException}}
was thrown. I think a smaller timeout and/or a bigger byte buffer would fix
the problem.
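As a side note, the likely reason the 1 MB write can finish before the timeout is that the kernel's socket send buffer absorbs the whole write even when the remote side never reads. A minimal sketch with plain {{java.net}} sockets (not the Hadoop {{Peer}} API; the class name and buffer size below are illustrative only) showing that a write small enough to fit in the send buffer returns immediately despite a stalled reader:
{code}
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class WriteBufferDemo {
  public static void main(String[] args) throws Exception {
    // Listener that accepts the connection but never reads,
    // mimicking a stalled remote peer.
    try (ServerSocket server = new ServerSocket(0);
         Socket client = new Socket("localhost", server.getLocalPort());
         Socket accepted = server.accept()) {
      OutputStream out = client.getOutputStream();
      byte[] buf = new byte[8 * 1024]; // small enough for the kernel send buffer
      long start = System.nanoTime();
      out.write(buf); // returns immediately: the kernel buffers the data
      long elapsedMs = (System.nanoTime() - start) / 1_000_000;
      System.out.println("write of " + buf.length + " bytes took " + elapsedMs + " ms");
    }
  }
}
{code}
On a fast machine the same buffering can swallow 1 MB as well, which would explain why the failure shows up on some hosts and not others.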
> TestDistributedFileSystem#testDFSClientPeerWriteTimeout fails
> -------------------------------------------------------------
>
> Key: HDFS-8812
> URL: https://issues.apache.org/jira/browse/HDFS-8812
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Affects Versions: 2.8.0
> Reporter: Akira AJISAKA
>
> TestDistributedFileSystem#testDFSClientPeerWriteTimeout fails.
> {noformat}
> Running org.apache.hadoop.hdfs.TestDistributedFileSystem
> Tests run: 18, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 50.038 sec
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
> testDFSClientPeerWriteTimeout(org.apache.hadoop.hdfs.TestDistributedFileSystem)
> Time elapsed: 0.66 sec <<< FAILURE!
> java.lang.AssertionError: wrong exception:java.lang.AssertionError: write
> should timeout
> at org.junit.Assert.fail(Assert.java:88)
> at
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout(TestDistributedFileSystem.java:1206)
> {noformat}
> See
> https://builds.apache.org/job/PreCommit-HDFS-Build/11783/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testDFSClientPeerWriteTimeout/
> and
> https://builds.apache.org/job/PreCommit-HDFS-Build/11786/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testDFSClientPeerWriteTimeout/
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)