On 11/2/16 6:17 PM, Chris Hegarty wrote:
Amy,
On 2 Nov 2016, at 09:43, Amy Lu <amy...@oracle.com> wrote:
Please review the patch for java/net/ipv6tests/UdpTest.java
bug: https://bugs.openjdk.java.net/browse/JDK-8143097
webrev: http://cr.openjdk.java.net/~amlu/8143097/webrev.00/
I think what you have will probably be ok, given the 50% tolerance, but shouldn't
the end time be 10_000 + 2_000 (the delay + the socket timeout)?
Thanks for your review, Chris.
Let me try to explain more.
The end time here is for s1 socket timeout:
s1.setSoTimeout(10000);
No matter how long the delay in runAfter is (the delay before s.send(p)),
s1.receive succeeds as long as the packet arrives within 10000 ms (the given
timeout); otherwise it throws SocketTimeoutException.
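To illustrate the timing behavior being discussed, here is a minimal, self-contained sketch of a delayed send against a socket with SO_TIMEOUT set. The class and helper names (SoTimeoutSketch, receiveWithDelayedSend) are mine, not from UdpTest.java; only the 2000/10000 numbers come from the test.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class SoTimeoutSketch {

    // Receive on a socket with SO_TIMEOUT = timeoutMs while another thread
    // sends a packet after delayMs. Returns the elapsed receive time in ms.
    static long receiveWithDelayedSend(int delayMs, int timeoutMs) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket()) {
            receiver.setSoTimeout(timeoutMs);
            int port = receiver.getLocalPort();

            Thread sender = new Thread(() -> {
                try (DatagramSocket s = new DatagramSocket()) {
                    Thread.sleep(delayMs);
                    byte[] data = "hello".getBytes();
                    s.send(new DatagramPacket(data, data.length,
                            InetAddress.getLoopbackAddress(), port));
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            sender.start();

            long t1 = System.currentTimeMillis();
            // Succeeds because delayMs < timeoutMs; would throw
            // SocketTimeoutException if the packet arrived too late.
            receiver.receive(new DatagramPacket(new byte[128], 128));
            long elapsed = System.currentTimeMillis() - t1;
            sender.join();
            return elapsed;
        }
    }

    public static void main(String[] args) throws Exception {
        // Same numbers as the test: 2000 ms delay, 10000 ms timeout.
        long elapsed = receiveWithDelayedSend(2_000, 10_000);
        // elapsed tracks the sender's delay (~2000 ms), not the timeout.
        System.out.println("received after ~" + elapsed + " ms");
    }
}
```

The point: the measured receive time follows the sender's delay, not the timeout, so asserting it equals any single expected value is fragile.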
Thanks,
Amy
-Chris.
This test fails intermittently in a scenario that checks that a DatagramSocket's
'receive' (with SO_TIMEOUT enabled) works even after a delay, as long as the
delay is within the specified timeout:
120 static void test2 () throws Exception {
121 s1 = new DatagramSocket ();
......
151 s1.setSoTimeout(10000);
152 runAfter (2000, new Runnable () { <<<< --- run after the given delay (2000) has elapsed
153 public void run () {
......
156 s.send (p);
......
158 }
159 });
160 t1 = System.currentTimeMillis();
161 s1.receive (new DatagramPacket (new byte [128], 128)); <<<< --- receive should work here
162 checkTime (System.currentTimeMillis() - t1, 4000);
The final checkTime call checks that the measured time
(System.currentTimeMillis() - t1) is equal, within a 50% tolerance, to the
expected time (4000). This assumption is not correct. The test should instead
check that the measured time is between 2000 (the given delay) and 10000 (the
given timeout).
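A range check along those lines might look like the sketch below. checkTimeBetween is a hypothetical replacement for the test's checkTime, not an existing helper in the test library:

```java
public class CheckTimeSketch {

    // Hypothetical helper: assert that elapsed lies within
    // [lowerBound, upperBound] rather than near a single expected value.
    static void checkTimeBetween(long elapsed, long lowerBound, long upperBound) {
        if (elapsed < lowerBound || elapsed > upperBound) {
            throw new RuntimeException("elapsed " + elapsed
                    + " ms outside [" + lowerBound + ", " + upperBound + "]");
        }
    }

    public static void main(String[] args) {
        // A 3500 ms receive is fine: after the 2000 ms delay,
        // before the 10000 ms SO_TIMEOUT.
        checkTimeBetween(3_500, 2_000, 10_000);
        System.out.println("ok");

        // A 12000 ms receive would mean the timeout did not fire in time.
        try {
            checkTimeBetween(12_000, 2_000, 10_000);
        } catch (RuntimeException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Bounding by the delay and the timeout makes the check independent of scheduling jitter, which is what causes the intermittent failures with the fixed 4000 ms expectation.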