Re: ServerSocket.isBound() continues to return true after close() is called.

2019-04-12 Thread Norman Maurer
No worries.

Thanks a lot!


> On 12. Apr 2019, at 10:44, Chris Hegarty  wrote:
> 
> The following JIRA issue has been filed to track this:
>https://bugs.openjdk.java.net/browse/JDK-8222363
> 
> -Chris.
> 
> P.S. my bad! I missed this case when working on 6505016 ( in 2007! )
> 
>> On 11 Apr 2019, at 16:27, Norman Maurer wrote:
>> 
>> Ok thanks… update to the javadocs sounds good to me. I was just surprised by 
>> the behaviour :)
>> 
>>> On 11. Apr 2019, at 17:12, Michael McMahon wrote:
>>> 
>>> Norman
>>> 
>>> The specification on what happens to all socket types was updated many 
>>> years ago
>>> in bug id 6505016, but it looks like ServerSocket::isBound was missed from 
>>> that effort.
>>> I think we should probably update the spec to reflect current behavior and 
>>> be consistent
>>> with the change above. There will be some small spec updates to 
>>> ServerSocket coming
>>> which originated from the SocketImpl replacement work that Alan mentioned 
>>> recently
>>> and I think we can include this small change probably with one of those.
>>> 
>>> Michael.
>>> 
>>> On 11/04/2019, 13:40, Norman Maurer wrote:
>>>> 
>>>> Hi there,
>>>> 
>>>> While working on netty I just noticed that a ServerSocket will keep 
>>>> returning true when isBound() is called even after it was closed. Is this 
>>>> by design? I was a bit surprised by this honestly, as after the socket is 
>>>> closed there is no way it will accept any more connections.
> 



Re: ServerSocket.isBound() continues to return true after close() is called.

2019-04-11 Thread Norman Maurer
Ok thanks… update to the javadocs sounds good to me. I was just surprised by 
the behaviour :)

> On 11. Apr 2019, at 17:12, Michael McMahon  
> wrote:
> 
> Norman
> 
> The specification on what happens to all socket types was updated many years 
> ago
> in bug id 6505016, but it looks like ServerSocket::isBound was missed from 
> that effort.
> I think we should probably update the spec to reflect current behavior and be 
> consistent
> with the change above. There will be some small spec updates to ServerSocket 
> coming
> which originated from the SocketImpl replacement work that Alan mentioned 
> recently
> and I think we can include this small change probably with one of those.
> 
> Michael.
> 
> On 11/04/2019, 13:40, Norman Maurer wrote:
>> 
>> Hi there,
>> 
>> While working on netty I just noticed that a ServerSocket will keep 
>> returning true when isBound() is called even after it was closed. Is this 
>> by design? I was a bit surprised by this honestly, as after the socket is 
>> closed there is no way it will accept any more connections.
>> 
>> If this is by design, should we at least update the javadocs?
>> 
>> https://docs.oracle.com/en/java/javase/12/docs/api/java.base/java/net/ServerSocket.html#isBound()
>> 
>> In contrast for DatagramSocket it is directly called out:
>> 
>> https://docs.oracle.com/en/java/javase/12/docs/api/java.base/java/net/DatagramSocket.html#isBound()
>> 
>> So I wonder if it was mainly done the same way for a ServerSocket to be more 
>> consistent here or if it is just an oversight.
>> 
>> This code shows the behaviour. 
>> 
>> ServerSocket socket = new ServerSocket(0);
>> 
>> socket.close();
>> if (socket.isBound()) {
>>     throw new AssertionError();
>> }
>> 
>> 
>> 
>> Thanks,
>> Norman
>> 



ServerSocket.isBound() continues to return true after close() is called.

2019-04-11 Thread Norman Maurer
Hi there,

While working on netty I just noticed that a ServerSocket will keep returning 
true when isBound() is called even after it was closed. Is this by design? I 
was a bit surprised by this honestly, as after the socket is closed there is 
no way it will accept any more connections.

If this is by design, should we at least update the javadocs?

https://docs.oracle.com/en/java/javase/12/docs/api/java.base/java/net/ServerSocket.html#isBound()
 


In contrast for DatagramSocket it is directly called out:

https://docs.oracle.com/en/java/javase/12/docs/api/java.base/java/net/DatagramSocket.html#isBound()
 


So I wonder if it was mainly done the same way for a ServerSocket to be more 
consistent here or if it is just an oversight.

This code shows the behaviour. 

ServerSocket socket = new ServerSocket(0);

socket.close();
if (socket.isBound()) {
    throw new AssertionError();
}
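For contrast, a quick check (added here as an editorial aside, not part of the original mail) shows that DatagramSocket behaves the same way at runtime; the difference is only that its javadoc states the behaviour explicitly:

```java
import java.net.DatagramSocket;

public class DatagramBoundCheck {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(0); // bind to an ephemeral port
        socket.close();
        // As with ServerSocket, isBound() keeps returning true after close(),
        // but DatagramSocket's javadoc documents this behaviour explicitly.
        System.out.println(socket.isBound()); // prints: true
    }
}
```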



Thanks,
Norman



Re: Public API to get the search list that is used on the system

2018-09-27 Thread Norman Maurer
Forgot to ask… is it somehow possible to “subscribe” to an issue and get 
notified once there are some updates?

Bye
Norman


> On 27. Sep 2018, at 12:18, Norman Maurer  wrote:
> 
> Thanks a lot.
> 
> That would provide exactly what we need in netty.
> 
> Bye
> Norman
> 
> 
>> On 27. Sep 2018, at 12:09, Chris Hegarty  wrote:
>> 
>> Norman,
>> 
>>> On 26 Sep 2018, at 12:45, Norman Maurer  
>>> wrote:
>>> 
>>> BTW I am happy to open an enhancement request for this with more details. 
>>> Just wanted to check in first here :)
>> 
>> I searched the bug database to ensure that this request had
>> not come up before, and found nothing. I took the liberty of
>> filing the following JIRA to track the enhancement request.
>> 
>> "Support retrieving the system's name service configuration”
>> https://bugs.openjdk.java.net/browse/JDK-8211216
>> 
>> If the above enhancement were available, then it would be
>> a suitable replacement for any code accessing 
>> sun.net.dns.ResolverConfiguration directly. I think that this would make
>> a good addition to the java.net package.
>> 
>> -Chris.
> 



Re: Public API to get the search list that is used on the system

2018-09-27 Thread Norman Maurer
Thanks a lot.

That would provide exactly what we need in netty.

Bye
Norman


> On 27. Sep 2018, at 12:09, Chris Hegarty  wrote:
> 
> Norman,
> 
>> On 26 Sep 2018, at 12:45, Norman Maurer  wrote:
>> 
>> BTW I am happy to open an enhancement request for this with more details. 
>> Just wanted to check in first here :)
> 
> I searched the bug database to ensure that this request had
> not come up before, and found nothing. I took the liberty of
> filing the following JIRA to track the enhancement request.
> 
> "Support retrieving the system's name service configuration”
> https://bugs.openjdk.java.net/browse/JDK-8211216
> 
> If the above enhancement were available, then it would be
> a suitable replacement for any code accessing 
> sun.net.dns.ResolverConfiguration directly. I think that this would make
> a good addition to the java.net package.
> 
> -Chris.



Re: Public API to get the search list that is used on the system

2018-09-26 Thread Norman Maurer
BTW I am happy to open an enhancement request for this with more details. Just 
wanted to check in first here :)

Just let me know if I should do it or not.

Bye
Norman


> On 26. Sep 2018, at 13:15, Norman Maurer  wrote:
> 
> Yeah, preferably a method for both, but JNDI works for the nameservers for now.
> 
>> Am 26.09.2018 um 13:06 schrieb Chris Hegarty :
>> 
>> Hi Norman,
>> 
>> A non-blocking DNS resolver, good use-case.
>> 
>> Clearly the system configured name servers are of interest too ( I bring
>> it up since your specific request is about search domains )? Search
>> domains are only part of the problem.
>> 
>> I assume you can retrieve the name servers through the JNDI API, and are
>> relatively happy with that?
>> 
>> -Chris.
>> 
>>> On 26/09/18 11:42, Norman Maurer wrote:
>>> Yeah… In netty we have a DNS resolver implementation that is completely 
>>> non-blocking and for this we need it to correctly resolve in some cases. 
>>> Blocking DNS resolution just doesn’t “cut it” when you use non-blocking IO 
>>> and also need to resolve a lot of stuff during connect etc.



Re: Public API to get the search list that is used on the system

2018-09-26 Thread Norman Maurer
Yeah, preferably a method for both, but JNDI works for the nameservers for now.

> Am 26.09.2018 um 13:06 schrieb Chris Hegarty :
> 
> Hi Norman,
> 
> A non-blocking DNS resolver, good use-case.
> 
> Clearly the system configured name servers are of interest too ( I bring
> it up since your specific request is about search domains )? Search
> domains are only part of the problem.
> 
> I assume you can retrieve the name servers through the JNDI API, and are
> relatively happy with that?
> 
> -Chris.
> 
>> On 26/09/18 11:42, Norman Maurer wrote:
>> Yeah… In netty we have a DNS resolver implementation that is completely 
>> non-blocking and for this we need it to correctly resolve in some cases. 
>> Blocking DNS resolution just doesn’t “cut it” when you use non-blocking IO 
>> and also need to resolve a lot of stuff during connect etc.


Re: Public API to get the search list that is used on the system

2018-09-26 Thread Norman Maurer
Yeah… In netty we have a DNS resolver implementation that is completely 
non-blocking, and for this we need it to correctly resolve in some cases. 
Blocking DNS resolution just doesn’t “cut it” when you use non-blocking IO and 
also need to resolve a lot of stuff during connect etc.


Bye
Norman


> On 26. Sep 2018, at 12:40, Chris Hegarty  wrote:
> 
> Hi Norman,
> 
> As you correctly found, there is a private implementation that is used
> by JNDI. As far as I am aware, there are no plans to provide a Java SE
> API for retrieving the default search domain list. Is there a particular
> use-case that you have in mind that requires this?
> 
> -Chris.
> 
> On 26/09/18 10:06, Norman Maurer wrote:
>> Hi all,
>> I wonder if there is any plan to provide a public API to retrieve the 
>> “search list”.
>> At the moment this is exposed via an internal API only:
>> sun.net.dns.ResolverConfiguration.searchlist()
>> I know I can use JNDI to get a list of DNS servers but I could not find 
>> anything for the search list:
>> https://docs.oracle.com/cd/A97688_16/generic.903/a97690/jndi.htm
>> Thanks
>> Norman



Public API to get the search list that is used on the system

2018-09-26 Thread Norman Maurer
Hi all,

I wonder if there is any plan to provide a public API to retrieve the “search 
list”.

At the moment this is exposed via an internal API only:

sun.net.dns.ResolverConfiguration.searchlist()

I know I can use JNDI to get a list of DNS servers but I could not find anything 
for the search list:

https://docs.oracle.com/cd/A97688_16/generic.903/a97690/jndi.htm 


Thanks
Norman
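As an editorial aside, the JNDI route mentioned above can be sketched roughly as follows. This is a hedged example, not part of the original mail; it relies on the `com.sun.jndi.dns` provider deriving its provider URL from the system's resolver configuration when no `PROVIDER_URL` is given:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;

public class SystemDnsServers {
    public static void main(String[] args) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        // No PROVIDER_URL set: the DNS provider falls back to the
        // system-configured name servers it discovers itself.
        try {
            InitialDirContext ctx = new InitialDirContext(env);
            // The derived provider URL lists the servers, e.g. "dns://192.168.1.1".
            Object url = ctx.getEnvironment().get(Context.PROVIDER_URL);
            System.out.println("System DNS servers: " + url);
            ctx.close();
        } catch (NamingException e) {
            // No resolver configuration could be found on this system.
            System.out.println("System DNS servers: <none found>");
        }
    }
}
```

There is no equivalent for the search list, which is exactly the gap this thread is about.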



Re: [PATCH] SOCK_CLOEXEC for opening sockets

2018-07-10 Thread Norman Maurer
+1 I think this makes a lot of sense


> On 10. Jul 2018, at 17:54, Alan Bateman  wrote:
> 
> On 10/07/2018 17:40, Martin Buchholz wrote:
>> I agree with this approach - it parallels the efforts made with O_CLOEXEC in 
>> past years.
>> 
>> According to 
>> https://www.freebsd.org/cgi/man.cgi?query=socket&sektion=2 
>> 
>> SOCK_CLOEXEC is also available on freebsd.
>> 
> This is something that doesn't come up too often, I assume because most 
> developers use ProcessBuilder/Process rather than invoking fork from native 
> code.
> 
> If we are going to tackle this issue then it will require changes in several 
> places, changing PlainSocketImpl.c is just one of several.
> 
> -Alan



Unable to use custom SSLEngine with default TrustManagerFactory after updating to ea20 (and later)

2018-07-10 Thread Norman Maurer
Hi all,

I just tried to run the netty [1] testsuite with the latest jdk11 EA release (21) 
and saw a ClassCastException with our custom SSLEngine implementation:


Caused by: java.lang.ClassCastException: class 
io.netty.handler.ssl.OpenSslEngine cannot be cast to class 
sun.security.ssl.SSLEngineImpl (io.netty.handler.ssl.OpenSslEngine is in 
unnamed module of loader 'app'; sun.security.ssl.SSLEngineImpl is in module 
java.base of loader 'bootstrap')
at 
java.base/sun.security.ssl.SSLAlgorithmConstraints.<init>(SSLAlgorithmConstraints.java:93)
at 
java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:270)
at 
java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
at 
io.netty.handler.ssl.ReferenceCountedOpenSslClientContext$ExtendedTrustManagerVerifyCallback.verify(ReferenceCountedOpenSslClientContext.java:237)
at 
io.netty.handler.ssl.ReferenceCountedOpenSslContext$AbstractCertificateVerifier.verify(ReferenceCountedOpenSslContext.java:621)
... 27 more


This change seems to be related to:
http://hg.openjdk.java.net/jdk/jdk11/rev/68fa3d4026ea 


I think you are missing an instanceof check here in SSLAlgorithmConstraints 
before trying to cast to SSLEngineImpl, as otherwise it will be impossible to 
use custom implementations of SSLEngine (which we have in netty) with the 
default TrustManagerFactory.

Does this sound correct? Should I open a bug report?

Bye
Norman





Re: RFR : 8205959 : Do not restart close if errno is EINTR

2018-07-02 Thread Norman Maurer
+1. Retrying a close on EINTR most likely does not have the outcome you expect, 
and may even close the wrong FD if the same FD has already been reused (as even 
if EINTR is returned, the FD may have been closed).

> Am 02.07.2018 um 10:17 schrieb Baesken, Matthias :
> 
> Hello, there is a similar pattern (an attempt to restart close in case of 
> EINTR) in the code as well, in socket_md.c:
> 
> src/jdk.jdwp.agent/unix/native/libdt_socket/socket_md.c-147-int rv;
> src/jdk.jdwp.agent/unix/native/libdt_socket/socket_md.c-148-do {
> src/jdk.jdwp.agent/unix/native/libdt_socket/socket_md.c-149-rv = 
> close(fd);
> src/jdk.jdwp.agent/unix/native/libdt_socket/socket_md.c:150:} while (rv 
> == -1 && errno == EINTR);
> src/jdk.jdwp.agent/unix/native/libdt_socket/socket_md.c-151-
> src/jdk.jdwp.agent/unix/native/libdt_socket/socket_md.c-152-return rv;
> src/jdk.jdwp.agent/unix/native/libdt_socket/socket_md.c-153-}
> 
> Do you think this needs adjustment (on Linux) as well?
> 
> Best regards, Matthias
> 
> 
>> Message: 2
>> Date: Thu, 28 Jun 2018 18:19:46 +0100
>> From: Alan Bateman 
>> To: David Lloyd , ivan.gerasi...@oracle.com
>> Cc: OpenJDK Network Dev list 
>> Subject: Re: RFR : 8205959 : Do not restart close if errno is EINTR
>> Message-ID: <3fd1496f-ab83-a2d5-0699-13c8b735d...@oracle.com>
>> Content-Type: text/plain; charset=utf-8; format=flowed
>> 
>>> On 28/06/2018 17:35, David Lloyd wrote:
>>> :
>>> Do you (or Alan) think that this might have accounted for real-world
>>> connection problems?
>>> 
>> In the file I/O area, with NFS I think, we had an issue a long time ago
>> where close was retried after EIO. That issue was fixed a long time ago
>> but it's one that comes to mind in this general area.
>> 
>> -Alan
>> 
> 


Re: 8203937: Not possible to read data from socket after write detects connection reset

2018-06-18 Thread Norman Maurer
Sorry for the late response but I was able to verify the fix and all looks good 
again. 

Thanks for the quick turn-around,
Norman


> On 6. Jun 2018, at 11:03, Chris Hegarty  wrote:
> 
> 
>> On 3 Jun 2018, at 12:07, Alan Bateman  wrote:
>> 
>> ...
>> The following is the webrev to remove the detection of connection reset in 
>> the socket write code. We can think of this as a follow-up to JDK-8199329 
>> in that it removes more of the JDK 1.4 era code that special cased 
>> connection reset on Solaris to work around the eager reporting of network 
>> errors. With this change, it means that hitting the connection reset when 
>> writing will not interfere with any subsequent reading from the socket.
> 
> +1 This is my reading of the code and your changes.
> 
>> I've added a test to ensure that further changes in this area don't change 
>> the behavior on Linux and macOS. I've run the test several hundred times 
>> and haven't seen any intermittent failures (which is always a concern with 
>> tests like this).
>> 
>> http://cr.openjdk.java.net/~alanb/8203937/webrev/index.html
> 
> Looks good.
> 
> -Chris.
> 



Re: 8203937: Not possible to read data from socket after write detects connection reset

2018-06-04 Thread Norman Maurer
I will test this with Netty as well and report back

Bye
Norman

> Am 03.06.2018 um 13:07 schrieb Alan Bateman :
> 
> This is a follow-up to the "Problem with half-closure related to 
> connection-reset in Java 11" thread that we've been discussing here over the 
> last few days. As we discussed, you can't rely on being able to read 
> bytes after the connection reset but it is a reminder that a subtle behavior 
> change (even in a completely unspecified area) can break existing tests or 
> code that may rely on the behavior on specific platforms.
> 
> The following is the webrev to remove the detection of connection reset in 
> the socket write code. We can think of this as a follow-up to JDK-8199329 
> in that it removes more of the JDK 1.4 era code that special cased connection 
> reset on Solaris to work around the eager reporting of network errors. With 
> this change, it means that hitting the connection reset when writing will not 
> interfere with any subsequent reading from the socket. I've added a test to 
> ensure that further changes in this area don't change the behavior on Linux 
> and macOS. I've run the test several hundred times and haven't seen any 
> intermittent failures (which is always a concern with tests like this).
> 
> http://cr.openjdk.java.net/~alanb/8203937/webrev/index.html
> 
> -Alan


Re: Problem with half-closure related to connection-reset in Java 11

2018-06-01 Thread Norman Maurer

Hi Chris,


> On 1. Jun 2018, at 17:28, Chris Hegarty  wrote:
> 
> Hi Norman,
> 
> On 30/05/18 09:16, Norman Maurer wrote:
>> ...
>> I added a reproducer which does not use any netty classes to the PR that for now 
>> ignores test-failures caused by this when running on Java11+:
>> https://github.com/netty/netty/pull/7984#issuecomment-393071386
> 
> Would it be possible for you to post the reproducer, plain text is fine, on 
> the net-dev mailing list. That way we can be sure that any changes that are 
> made in this area resolve the particular issue you observe. I think we 
> understand the root cause, but it would be good to be sure.
> 
> -Chris.

Sure thing (it's exactly the same code as in the PR):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketException;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

public class Reproducer {

    public static void main(String... args) throws Throwable {
        CountDownLatch readLatch = new CountDownLatch(1);
        CountDownLatch writeLatch = new CountDownLatch(1);
        AtomicReference<Throwable> error = new AtomicReference<>();
        byte[] bytes = new byte[512];

        try (ServerSocket server = new ServerSocket(0)) {
            Thread thread = new Thread(() -> {
                try (Socket client = new Socket()) {
                    client.connect(server.getLocalSocketAddress());

                    int readCount = 0;

                    try (InputStream in = client.getInputStream()) {
                        in.read();

                        readCount++;
                        readLatch.countDown();
                        writeLatch.await();

                        OutputStream out = client.getOutputStream();
                        try {
                            for (;;) {
                                // Just write until it fails.
                                out.write(1);
                            }
                        } catch (SocketException expected) {
                        }

                        try {
                            for (;;) {
                                // Read until it fails.
                                in.read();
                                readCount += 1;
                            }
                        } catch (SocketException expected) {
                        }
                    }

                    if (readCount != bytes.length) {
                        throw new AssertionError("Was not able to read remaining data after write error,"
                                + " read " + readCount + "/" + bytes.length + ".");
                    }
                } catch (Throwable cause) {
                    error.set(cause);
                }
            });

            thread.start();

            Socket accepted = server.accept();

            // Use SO_LINGER of 0 to ensure we trigger a connection-reset when
            // closing the accepted socket.
            accepted.setSoLinger(true, 0);
            accepted.getOutputStream().write(bytes);

            // Wait until we read the first byte. After this there should still
            // be 511 bytes left.
            readLatch.await();

            // Close the accepted channel and signal to the other thread that
            // it should try to write now.
            accepted.close();
            writeLatch.countDown();

            thread.join();

            // Rethrow any error that was produced by the other thread.
            Throwable cause = error.get();
            if (cause != null) {
                throw cause;
            }
        }
    }
}

That said I think Alan has a reproducer for it which basically does exactly the 
same on the issue:

https://bugs.openjdk.java.net/browse/JDK-8203937?jql=text%20~%20%22connection%20reset%22

Thanks
Norman




Re: Problem with half-closure related to connection-reset in Java 11

2018-06-01 Thread Norman Maurer



> On 1. Jun 2018, at 14:50, Alan Bateman  wrote:
> 
> 
> 
> On 01/06/2018 13:19, Florian Weimer wrote:
>> 
>> But there is a race, even on Linux.  If the network is fast enough and you 
>> get an RST segment from the other end, the kernel discards the receive 
>> buffer.
> Right although it can be a bit more predictable with loopback connections.
> 
>> 
>> So whatever you do on the JDK side, it's probably papering over an 
>> application bug that does not show up on some networks by sheer luck.
> All we're doing on the JDK side is removing JDK 1.4 era code that attempted 
> to deal with the Solaris-specific behavior. Once we remove more of that, the 
> JDK behavior will just reflect whatever the platform does.

Thanks a lot!

> 
> -Alan

Norman



Re: Problem with half-closure related to connection-reset in Java 11

2018-06-01 Thread Norman Maurer



> On 1. Jun 2018, at 14:13, Alan Bateman  wrote:
> 
> On 01/06/2018 10:21, Florian Weimer wrote:
>> On 05/29/2018 04:26 PM, Norman Maurer wrote:
>>> Yes, that's what I am saying… I think if a write fails due to a 
>>> connection-reset, a read should still be possible until we are told by the 
>>> OS that we also hit an error here.
>> 
>> Are there implementations where the kernel does *not* zap the read buffer 
>> when it receives an RST segment?  (Except perhaps Solaris, as mentioned 
>> further down the thread.)
> I can't say for sure whether the kernels actually drop the socket buffer or 
> not. For the scenario, the connection reset is reported when writing and on 
> both Linux and macOS you can read the previously received bytes in the socket 
> buffer. Solaris behavior is a bit different due to the way that it reports 
> network issues. Windows does not allow reading the bytes, it errors 
> immediately. It's surprising to hear about a test that depends on such 
> behavior, but these are the types of things that come up when changing things 
> in this area.
> 
> -Alan


All I was saying when I reported this issue was that I think there is a 
behaviour change and we should preserve the old behaviour and just delegate to 
the OS, which will report an error anyway at some point. IMHO the wrapper 
around bsd sockets should be as “small” as possible. 

Maybe you are right and the test in question is not something that makes sense 
in general but we had users in the past that complained when we closed the 
reading side without draining the socket when an error on the writing side 
happened. This was when we did some changes to not do so in Netty (by 
ourselves) and added the test. 

Just to give you a bit of background, hope it helps.

Norman



Re: Problem with half-closure related to connection-reset in Java 11

2018-05-31 Thread Norman Maurer



> Am 31.05.2018 um 20:41 schrieb Alan Bateman :
> 
> 
> 
>> On 31/05/2018 18:08, Norman Maurer wrote:
>> :
>>> [1] https://bugs.openjdk.java.net/browse/JDK-8203937
>> Also let me know if you need anything else or want me to test something
>> 
> I saw your other mail where you posted a link to GitHub. I should have 
> replied to ask you to attach the test so that it comes via OpenJDK 
> infrastructure.
> 
> In any case, I was able to create another test case to demonstrate the 
> behavior difference between JDK 10 and 11 and I've attached it to the bug. 
> It's as you described: the connection reset is detected when writing and a 
> subsequent reading fails eagerly when you expect to first consume any bytes 
> that may have been received before the connection was reset. The behavior 
> change is specific to Linux, it doesn't happen on macOS as the error is 
> reported as a pipe write error; On Windows it consistently reports an error 
> after the reset is detected so I doubt the test case ever passed there.
> 
> In summary, there is a behavior change, at least on Linux and I have a patch 
> to restore the old behavior, it just unspecified behavior and not something 
> that anything could really rely on.
> 
> -Alan
> 

Yeah, we only ran the test on Linux and macOS, so no idea about Windows here. 
Thanks for taking care, I will check and keep track of the issue.

Bye
Norman

Re: Problem with half-closure related to connection-reset in Java 11

2018-05-31 Thread Norman Maurer



> Am 29.05.2018 um 22:49 schrieb Alan Bateman :
> 
>> On 29/05/2018 17:28, Norman Maurer wrote:
>> :
>> Oh sorry, I thought that's the right system to use. I just followed the wiki 
>> page (I think).
> bugs.sun.com or bugreport.java.com is the right place. That routes the bugs 
> to the JIRA, just not automatically to the right project as often issues need 
> to be filtered or deleted. The bug tracking is JDK-8203937 [1].
> 
> 
>> :
>> Well it seems like it worked before on Linux as well? I mean with Java 10 it 
>> seems to at least not fail on Linux the same way as now.
>> 
> The only platform that I'm aware of that will report the reset when reading 
> and allow reading beyond the reset is Solaris and this is only because of the 
> way that it reports networking errors to applications. I think the behavior 
> change you observe is because the reset is reported when writing and this is 
> impacting the reading from the java.net.Socket. This is really odd behavior 
> that has been there since JDK 1.4, just not noticed because it would attempt 
> to read until it got the reset. This is all unspecified behavior so this is 
> why the testing with the changes in JDK 11 didn't show up anything. I agree 
> we should just eliminate this so that it doesn't impact the read.
> 
> Would you have cycles to create a small, and standalone test, that 
> demonstrates the issue? This could be something we turn into a regression 
> test to test behavior.
> 
> -Alan
> 
> [1] https://bugs.openjdk.java.net/browse/JDK-8203937

Also let me know if you need anything else or want me to test something

Bye
Norman

Re: Problem with half-closure related to connection-reset in Java 11

2018-05-30 Thread Norman Maurer


> On 30. May 2018, at 09:14, Norman Maurer  wrote:
> 
> 
> 
>> On 29. May 2018, at 22:49, Alan Bateman wrote:
>> 
>> On 29/05/2018 17:28, Norman Maurer wrote:
>>> :
>>> Oh sorry, I thought that's the right system to use. I just followed the wiki 
>>> page (I think).
>> bugs.sun.com or bugreport.java.com 
>> is the right place. That routes the bugs to the 
>> JIRA, just not automatically to the right project as often issues need to be 
>> filtered or deleted. The bug tracking is JDK-8203937 [1].
>> 
>> 
>>> :
>>> Well it seems like it worked before on Linux as well? I mean with Java 10 
>>> it seems to at least not fail on Linux the same way as now.
>>> 
>> The only platform that I'm aware of that will report the reset when reading 
>> and allow reading beyond the reset is Solaris and this is only because of 
>> the way that it reports networking errors to applications. I think the 
>> behavior change you observe is because the reset is reported when writing 
>> and this is impacting the reading from the java.net.Socket. This is really 
>> odd behavior that has been there since JDK 1.4, just not noticed because it 
>> would attempt to read until it got the reset. This is all unspecified 
>> behavior so this is why the testing with the changes in JDK 11 didn't show 
>> up anything. I agree we should just eliminate this so that it doesn't impact 
>> the read.
> 
> Yes exactly… I am only talking about the scenario here of having the write 
> error directly affect the read path without even trying to read from the 
> underlying fd.
> 
>> 
>> Would you have cycles to create a small, and standalone test, that 
>> demonstrates the issue? This could be something we turn into a regression 
>> test to test behavior.
> 
> Let me see..

I added a reproducer which does not use any netty classes to the PR that for now 
ignores test-failures caused by this when running on Java11+:

https://github.com/netty/netty/pull/7984#issuecomment-393071386

Hopefully this is useful to you.

This one fails when running on Linux with Java11 but passes with Java10 and 
earlier.


> 
> 
>> 
>> -Alan
>> 
>> [1] https://bugs.openjdk.java.net/browse/JDK-8203937
> 
> Bye
> Norman

Bye
Norman



Re: Problem with half-closure related to connection-reset in Java 11

2018-05-30 Thread Norman Maurer


> On 29. May 2018, at 22:49, Alan Bateman  wrote:
> 
> On 29/05/2018 17:28, Norman Maurer wrote:
>> :
>> Oh sorry, I thought that's the right system to use. I just followed the wiki 
>> page (I think).
> bugs.sun.com or bugreport.java.com is the right place. That routes the bugs 
> to the JIRA, just not automatically to the right project as often issues need 
> to be filtered or deleted. The bug tracking is JDK-8203937 [1].
> 
> 
>> :
>> Well it seems like it worked before on Linux as well? I mean with Java 10 it 
>> seems to at least not fail on Linux the same way as now.
>> 
> The only platform that I'm aware of that will report the reset when reading 
> and allow reading beyond the reset is Solaris and this is only because of the 
> way that it reports networking errors to applications. I think the behavior 
> change you observe is because the reset is reported when writing and this is 
> impacting the reading from the java.net.Socket. This is really odd behavior 
> that has been there since JDK 1.4, just not noticed because it would attempt 
> to read until it got the reset. This is all unspecified behavior so this is 
> why the testing with the changes in JDK 11 didn't show up anything. I agree 
> we should just eliminate this so that it doesn't impact the read.

Yes exactly… I am only talking about the scenario here of having the write error 
directly affect the read path without even trying to read from the underlying 
fd.

> 
> Would you have cycles to create a small, and standalone test, that 
> demonstrates the issue? This could be something we turn into a regression 
> test to test behavior.

Let me see..


> 
> -Alan
> 
> [1] https://bugs.openjdk.java.net/browse/JDK-8203937

Bye
Norman




Re: Problem with half-closure related to connection-reset in Java 11

2018-05-29 Thread Norman Maurer



> On 29. May 2018, at 18:26, Alan Bateman  wrote:
> 
> On 29/05/2018 15:26, Norman Maurer wrote:
>> :
>> 
>> 
>> Yes, that's what I am saying… I think if a write fails due to a 
>> connection-reset, a read should still be possible until we are told by the 
>> OS that we also hit an error here. Honestly I think this scenario can happen 
>> quite often in reality, where some software writes while draining data from 
>> the socket in chunks. With Java 11 this may lead to the situation where the 
>> user may never see the data even when it's waiting on the socket to be read, 
>> which I think is weird. What kind of problems this may cause in different 
>> programs is hard to know, but it's definitely something that surprised me. 
>> Even more after I started to debug and could see the packets via tcpdump etc.
>> 
>> Also as a side-note when using SocketChannel this works perfectly fine, as 
>> before. I will open a bug with all the informations in this email as 
>> requested. I just wanted to make sure first that my observations are correct 
>> and wanted to provide as much details as possible ( + making a strong 
>> argument to why I think this is a regression).
>> 
> I see you've created an issue to the bugs.sun.com system - thanks! I'll move 
> that to the JDK project so that it's accessible via bugs.openjdk.java.net.

Oh sorry, I thought that’s the right system to use. I just followed the wiki page 
(I think). 
> 
> You are right that there is subtle behavior change, something that wasn't 
> really intended. The code to manage reading beyond connection resets has 
> always been problematic and only ever worked on Solaris. The SocketChannel 
> implementation has never attempted to do this, it just reflects the platform 
> behavior when reading after a reset.

Well, it seems like it worked before on Linux as well? I mean, with Java 10 it 
at least did not fail on Linux the same way it does now.


> 
> -Alan.


Thanks
Norman



Re: Problem with half-closure related to connection-reset in Java 11

2018-05-29 Thread Norman Maurer



> On 29. May 2018, at 16:19, Alan Bateman  wrote:
> 
> On 29/05/2018 14:52, Norman Maurer wrote:
>> Hi all,
>> 
>> After trying to run our testsuite in Netty [1] with Java11+ea15 I noticed we 
>> have one failing test that seems to be related to:
>> 
>> https://bugs.openjdk.java.net/browse/JDK-8199329
>> http://hg.openjdk.java.net/jdk/jdk/rev/92cca24c8807
>> 
>> I think the change here is not 100% correct, as it basically disallows 
>> draining any remaining bytes from the socket if a write causes a connection 
>> reset. This should be completely safe to do. At the moment, if a write 
>> causes a connection reset you basically lose all the pending bytes that 
>> are sitting on the socket waiting to be read.
>> 
>> This happens because SocketOutputStream.write(…) may call 
>> AbstractPlainSocketImpl.setConnectionReset(…). Once this method is called 
>> any read(…) call will just throw a SocketException without even attempt to 
>> read any remaining data.
>> 
>> This was not the case with earlier Java versions, and I would argue it’s a 
>> bug.
>> 
>> Let me know what you think and please ask if you have any more questions,
> Reading beyond connection reset has always been problematic and never 
> guaranteed to work, esp. on the main stream platforms. I think you are 
> arguing that hitting connection reset when writing shouldn't impact another 
> thread reading. That probably makes sense although it's hard to relate to 
> something that depends on such behavior. Can you submit a bug to track this 
> so that we at least track the behavior change?
> 
> -Alan


Yes, that’s what I am saying… I think if a write fails due to a connection reset, 
a read should still be possible until we are told by the OS that we also hit an 
error there. Honestly, I think this scenario can happen quite often in reality, 
where some software writes while draining data from the socket in chunks. With 
Java 11 this may lead to the situation where the user may never see the data 
even when it’s waiting on the socket to be read, which I think is weird. What 
kind of problems this may cause in different programs is hard to know, but it’s 
definitely something that surprised me. Even more so after I started to debug 
and could see the packets via tcpdump etc. 

Also, as a side note, when using SocketChannel this works perfectly fine, as 
before. I will open a bug with all the information in this email, as requested. 
I just wanted to make sure first that my observations are correct, and wanted 
to provide as much detail as possible (+ make a strong argument as to why I 
think this is a regression).


Bye
Norman



Re: Behaviour of SocketChannelImpl.close() in Java11 (ea+12)

2018-05-11 Thread Norman Maurer


> On 11. May 2018, at 21:34, Alan Bateman <alan.bate...@oracle.com> wrote:
> 
> (cc'ing nio-dev as as this is asking about SocketChannel).
> 
> On 11/05/2018 19:10, Norman Maurer wrote:
>> Hi all,
>> 
>> I recently started to test Netty [1] with Java 11 and found that we have two 
>> tests that are currently failing due to some changes in Java 11 compared to 
>> earlier versions.
>> 
>> I wanted to get your thoughts on the behaviour changes:
>> 
>> 1) SocketChannelImpl.close() will trigger shutdown(…,SHUT_WR) if the channel 
>> is connected, before doing the actual close(…).
>> 
>> This is different to earlier versions, where the channel was just closed via 
>> close(…). We noticed this because we have a unit test that basically sets 
>> SO_LINGER 0 and verifies that the remote peer sees an ECONNRESET when the 
>> channel is closed. This is not the case anymore, as the shutdown will cause 
>> an EOF.
>> I wonder if depending on the connection reset is just plain wrong on our 
>> side, as it’s an implementation detail, but at least it was super surprising 
>> to me that a shutdown(…) was called during the close operation. Especially 
>> as shutdownOutput() is exposed directly as well.
>> 
>> 
>> 2) SocketChannelImpl.close() will not directly close the fd but add it to a 
>> queue that will be processed by the Selector.
>> 
>> Again this is different to earlier versions and had the effect that one test 
>> failed that expected that the fd is really closed when close() returns.
>> 
> If I read this correctly, #1 and #2 are asking about closing a SocketChannel 
> that is registered with a Selector. If registered with a Selector then the 
> channel must be configured non-blocking.

You are reading it correctly …


> 
> I'll start with #2 as closing a SocketChannel registered with a Selector has 
> always delayed releasing the file descriptor until it has been flushed from 
> all Selectors. If the Netty tests are monitoring the file descriptor count 
> (maybe with the management API?) then you shouldn't see a difference. If #2 
> is about whether the peer sees a graceful close or a TCP reset then the 
> behavior should be the same as older releases too, except when the 
> linger-on-close socket option is enabled, which leads to your question #1.

I should have also mentioned that this test uses SO_LINGER 0, so it is also 
related to 1) a bit, but not 100%. The problem basically is that before, after 
we called SocketChannel.close(), a write on the peer SocketChannel always 
failed once the close() call returned, which no longer seems to be the case.

> 
> On #1, then one initial thing to point out is that SO_LINGER is only 
> specified for sockets configured in blocking mode. From the javadoc:
> 
> "This socket option is intended for use with sockets that are configured in 
> blocking mode only. The behavior of the close method when this option is 
> enabled on a non-blocking socket is not defined.”

Interestingly enough, I never noticed this javadoc and just assumed it would 
work the same way as when I do it via C, where it also works for non-blocking 
sockets (at least with SO_LINGER 0).
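As a point of reference, the defined (blocking-mode) use of the option with plain java.net.Socket can be exercised as in the illustrative sketch below; the class and method names are made up:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class LingerSketch {
    // SO_LINGER with a value of 0 makes close() discard unsent data and
    // send a TCP RST instead of the normal FIN handshake. The javadoc only
    // defines this for sockets in blocking mode, as quoted above.
    static int lingerAfterEnable() throws IOException {
        try (ServerSocket server = new ServerSocket(0, 50, InetAddress.getLoopbackAddress());
             Socket client = new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort());
             Socket accepted = server.accept()) {
            client.setSoLinger(true, 0);
            return client.getSoLinger(); // 0 => hard close (RST) on close()
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(lingerAfterEnable()); // prints 0
    }
}
```

getSoLinger() returns -1 when the option is disabled and the linger interval otherwise, so 0 here means close() will reset the connection.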

> 
> You are correct that there is a behavior difference in JDK 11 when enabling 
> this socket option on a SocketChannel configured non-blocking and then 
> closing it when it is registered with one or more Selector. That behavior 
> difference arises because close in JDK 10 (and older) always "pre-closed" 
> (essentially a dup2) the file descriptor whereas JDK 11 does not do this when 
> the channel is configured non-blocking. The two phase close trick is needed 
> for channels in blocking mode, not so for channels configured non-blocking 
> where it has always been very problematic to switch the file descriptor 
> whilst registered with a Selector.
> 
> As regards the half close / shutdown when registered with a Selector then 
> this is so that the peer detects the connection shutdown. The peer would 
> otherwise not observe the shutdown until the channel is flushed from the Selector.
> 
> I'm in two minds as to whether we should do anything to try to restore "not 
> defined" behavior. We could potentially skip the shutdown when the 
> linger-on-close socket option is enabled. That would at least allow tests 
> that exercise TCP resets to continue to work, assuming the channel is flushed 
> promptly from all Selectors that it is registered with.

As the javadocs clearly state it’s “undefined” for non-blocking (which I never 
noticed), I am completely happy either way. I was just surprised about the 
change in behaviour and wanted to bring it up as others may also be surprised.


> 
> -Alan

Thanks
Norman




Re: Behaviour of SocketChannelImpl.close() in Java11 (ea+12)

2018-05-11 Thread Norman Maurer
Sorry, I just noticed this would better be asked on nio-dev. Will ask there.

Bye
Norman


> On 11. May 2018, at 20:10, Norman Maurer <norman.mau...@googlemail.com> wrote:
> 
> Hi all,
> 
> I recently started to test Netty [1] with Java 11 and found that we have two 
> tests that are currently failing due to some changes in Java 11 compared to 
> earlier versions.
> 
> I wanted to get your thoughts on the behaviour changes:
> 
> 1) SocketChannelImpl.close() will trigger shutdown(…,SHUT_WR) if the channel 
> is connected, before doing the actual close(…).
> 
> This is different to earlier versions, where the channel was just closed via 
> close(…). We noticed this because we have a unit test that basically sets 
> SO_LINGER 0 and verifies that the remote peer sees an ECONNRESET when the 
> channel is closed. This is not the case anymore, as the shutdown will cause 
> an EOF. 
> I wonder if depending on the connection reset is just plain wrong on our 
> side, as it’s an implementation detail, but at least it was super surprising 
> to me that a shutdown(…) was called during the close operation. Especially 
> as shutdownOutput() is exposed directly as well.
> 
> 
> 2) SocketChannelImpl.close() will not directly close the fd but add it to a 
> queue that will be processed by the Selector.
> 
> Again this is different to earlier versions and had the effect that one test 
> failed that expected that the fd is really closed when close() returns. 
> 
> 
> I worked around these differences via [2] but I wanted to ask if this is 
> expected.
> 
> 
> Java version:
> 
> java version "11-ea" 2018-09-25
> Java(TM) SE Runtime Environment 18.9 (build 11-ea+12)
> Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11-ea+12, mixed mode)
> 
> 
> [1] http://netty.io
> [2] https://github.com/netty/netty/pull/7926
> 
> 



Re: JEP 321: HTTP Client (Standard)

2017-12-04 Thread Norman Maurer
Well put, David.

I couldn’t agree more. I would even go as far as to say this is not 
something I would include in the platform itself at all.

Bye
Norman


> Am 04.12.2017 um 19:41 schrieb David Lloyd :
> 
>> On Mon, Dec 4, 2017 at 10:17 AM,   wrote:
>> New JEP Candidate: http://openjdk.java.net/jeps/321
> 
> I have concerns.
> 
> This will be the first public, NIO-based, asynchronous/non-blocking
> network protocol API introduced into the JDK proper, _ever_.
> 
> First, I want to note that the API seems to bear no resemblance
> whatsoever with the asynchronous NIO.2 API.  Now I'm no fan of that
> API as it stands for a couple of reasons, but nevertheless it calls
> into question either the validity of the HTTP API as it stands, or the
> scope of reusability of the NIO.2 API.
> 
> With that items put aside: there are a wide variety of mature,
> non-blocking network protocol implementations out there, with
> literally thousands of years of experience distributed amongst their
> authors, maintainers, supporters, and communities, none of which were
> used as a model.  There was, as far as I can see, no kind of study of
> existing non-blocking approaches in Java, and their strengths and
> weaknesses; there was no round table of engineers with experience in
> this field, talking about what works and what doesn't work.
> 
> Some new, seemingly general-purpose, concepts are introduced by the
> code base, for example: ByteBufferReference, and ByteBufferPool.  Are
> these strategies that should be promoted to NIO proper?  If not, then
> are they _really_ right for this particular use case, particularly if,
> for some reason, a _second_ non-blocking network protocol API might be
> introduced some day, probably duplicating these concepts?
> 
> Making this thing be the first real platform NIO-based asynchronous
> network protocol API, yet being completely removed from the previous
> NIO.2 asynchronous APIs and any other existing stable, mature API,
> should be done very carefully and deliberately, and perhaps most
> importantly, incrementally: first, establish a non-blocking byte
> stream API that makes sense generally, and bring that into NIO
> (NIO.3?); then, perhaps, enhancements to byte buffers to better
> support efficient pooling.  By the time that point is reached, it is
> hopefully rapidly becoming obvious that this is not something that
> should be taken lightly.
> 
> I believe that most third-party implementations are taken less lightly
> than this seems to have been.  I and my team have been developing an
> asynchronous/non-blocking NIO library for ten years now, and while I'm
> proud of our work and the discoveries we've made, I am realistic about
> the fact that it's still pretty far from as good as it could be (not
> in the least part due to existing platform limitations), certainly far
> from something I'd say "hey let's standardize this as is".  I think
> that to standardize something of this type that was just written over
> the past 18-odd months reflects, to put it kindly, some pretty
> incredible confidence that I wish I shared.
> 
> Speaking *solely* in the interests of platform quality and integrity,
> I think that before _any_ high-level non-blocking/asynchronous
> protocol API is ever introduced into the platform, it would be an
> incredible waste to not have some kind of design consultation with
> other industry experts.  Now I'm not suggesting that a JDK API would
> have to be _agreeable_ to every expert, as we all know that is
> basically impossible; but at the very minimum, I am very confident
> that we can tell you what _doesn't_ work and the pitfalls we've found
> along the way, as well as what each of us would consider to be an
> ideal API, and that is information that has incredible value.
> 
> Talking about introducing the first-ever non-blocking protocol API
> into the platform, at _this_ stage, seems premature and needlessly
> risky.  I would suggest that maybe it's best for the API to stick to
> blocking I/O, at least for now.  Or else, take it outside of the
> platform, and let it mature in the wild where it can evolve without an
> overbearing concern for compatibility for a decade or so (no, I'm not
> kidding).  As long as this thing lives in the JDK, but isn't
> standardized, it's probably not going to be used heavily enough to
> really feel out its weak points.  And once it's in, its ability to
> evolve is severely hampered by compatibility constraints.
> 
> I feel like it is impossible to over-emphasize the difficulty of the
> problem of non-blocking I/O when it comes to interactions with user
> programs.  Though the fruits of such an effort are probably small in
> terms of API surface, the complexity is hard: hard enough that it is,
> in my mind, a project of a larger scale, maybe JSR scale.  And the
> benefit is potentially large: large enough that it could change the
> landscape of other specifications and network 

Re: Expose DNS Servers via public API

2017-03-30 Thread Norman Maurer
Nice!


This works :)
Norman
> On 30. Mar 2017, at 17:27, Alan Bateman <alan.bate...@oracle.com> wrote:
> 
> On 30/03/2017 15:59, Norman Maurer wrote:
> 
>> Thats why I tried to kick-off the topic :) Thanks for the quick reply btw…. 
>> So is this list the right place for this discussion ?
>> 
> I just checked the docs for the JNDI-DNS provider [1] and it documents that 
> the provider updates the java.naming.provider.url property with the 
> configuration when it isn't initially specified. So I think this will do what 
> you want:
> 
> Hashtable<String, String> env = new Hashtable<>();
> env.put(Context.INITIAL_CONTEXT_FACTORY, 
> "com.sun.jndi.dns.DnsContextFactory");
> env.put("java.naming.provider.url", "dns://");
> DirContext ctx = new InitialDirContext(env);
> String dnsUrls = (String) 
> ctx.getEnvironment().get("java.naming.provider.url");
> 
> -Alan
> 
> [1] http://docs.oracle.com/javase/8/docs/technotes/guides/jndi/jndi-dns.html



Re: Expose DNS Servers via public API

2017-03-30 Thread Norman Maurer

> On 30. Mar 2017, at 16:58, Alan Bateman <alan.bate...@oracle.com> wrote:
> 
> On 30/03/2017 15:36, Norman Maurer wrote:
>> Hi there,
>> 
>> I am not sure if this is the correct list for the question but as relate to 
>> network I will just try. If its the wrong list please tell me which one 
>> would be better fitted.
>> 
>> Is there reason why not expose the DNS Servers configured on a system. These 
>> are exposed in sun.net.dns.ResolverConfiguration but not exposed in any 
>> public way. Be able to access these would help people that want to do a 
>> custom dns lookup (for example non-blocking) like what we do in netty.
>> If this is something that would be considered I can come up with a patch for 
>> it.
>> 
> One could imagine introducing an API in java.net that exposes more of the 
> networking configuration but I think would require debate as to whether it's 
> the right thing to do or not (esp. as the existing networking APIs are 
> abstracted from whether resolution is done via DNS, host files, ..).
> 
> An alternative might to see if it is exposed by the JNDI-DNS provider (the 
> primary user of ResolverConfiguration and the reason it was created). It 
> might already be exposed in the environment.
> 
> -Alan

That’s why I tried to kick off the topic :) Thanks for the quick reply btw… So 
is this list the right place for this discussion?



Expose DNS Servers via public API

2017-03-30 Thread Norman Maurer
Hi there,

I am not sure if this is the correct list for the question, but as it relates 
to networking I will just try. If it’s the wrong list please tell me which one 
would be a better fit.

Is there a reason why the DNS servers configured on a system are not exposed? 
They are exposed in sun.net.dns.ResolverConfiguration, but not in any public 
way. Being able to access these would help people who want to do a custom DNS 
lookup (for example non-blocking), like what we do in Netty.
If this is something that would be considered, I can come up with a patch for it.
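For completeness, the JNDI-based workaround that comes up in this thread (querying the DNS provider's environment) can be sketched as below. Whether java.naming.provider.url gets filled in depends on the system's resolver configuration, so this sketch returns null when the provider cannot be initialised; the class and method names are made up:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class DnsServers {
    // The JNDI DNS provider fills in java.naming.provider.url with the
    // system's configured name servers when no explicit server is given.
    static String providerUrl() {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        env.put(Context.PROVIDER_URL, "dns://");
        try {
            DirContext ctx = new InitialDirContext(env);
            try {
                return (String) ctx.getEnvironment().get(Context.PROVIDER_URL);
            } finally {
                ctx.close();
            }
        } catch (NamingException e) {
            return null; // no resolver configuration available
        }
    }

    public static void main(String[] args) {
        System.out.println(providerUrl()); // e.g. "dns://192.168.0.1"
    }
}
```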

Bye,
Norman
 



Re: Special exception for EMFILE / ENFILE when using sockets.

2016-12-07 Thread Norman Maurer
If we go this route we should also reconsider:

https://bugs.openjdk.java.net/browse/JDK-8167161

Still not sure what solution I would prefer, though.


> On 6 Dec 2016, at 12:14, Langer, Christoph <christoph.lan...@sap.com> wrote:
> 
> Hi,
>  
> I would also support if IOException could be enriched to expose the native 
> error code. However, the user then would need to evaluate this code in a 
> platform specific manner. Maybe there could be some class/interface which 
> would help to translate platform specific error codes to common constants for 
> common error types.
>  
> Best regards
> Christoph
>   <>
> From: net-dev [mailto:net-dev-boun...@openjdk.java.net] On Behalf Of Bernd 
> Eckenfels
> Sent: Montag, 5. Dezember 2016 20:09
> To: net-dev@openjdk.java.net
> Subject: Re: Special exception for EMFILE / ENFILE when using sockets.
>  
> Hello,
>  
> I know it is a radical idea, but what about exposing errno value in an 
> IOException.
>  
>  I do think that the new ConnectionRefused subtype is helpful, but for each 
> seldom-occurring error case a dedicated exception is more work than a one-time 
> mapping of native error codes to fields.
> 
> Gruss
> Bernd
> -- 
> http://bernd.eckenfels.net
>  
> 
> 
> 
> On Mon, Dec 5, 2016 at 7:28 PM +0100, "Norman Maurer" 
> <norman.mau...@googlemail.com <mailto:norman.mau...@googlemail.com>> wrote:
> 
>  
>  
> > Am 05.12.2016 um 18:48 schrieb David M. Lloyd :
> > 
> >> On 12/05/2016 06:29 AM, Norman Maurer wrote:
> >> Hi all,
> >> 
> >> I wonder if it would be possible to add a new public exception type for 
> >> the situation of an SocketChannel.accept(…) or SocketChannel.open(…)  (and 
> >> the same for ServerSocket / Socket) failing because of too many open files.
> >> The reason is because especially when acting as a server such an exception 
> >> may be something you can easily recover from. At there is basically no way 
> >> to detect if this was the cause of an IOException or not.
> >> 
> >> On unix / linux this are the errno values:
> >> 
> >> [EMFILE]   The per-process descriptor table is full.
> >> [ENFILE]   The system file table is full.
> >> 
> >> For netty we would love to be able to know if this was the case of the 
> >> problem and if so just stop accepting for a period of time to help the 
> >> system to recover.
> >> 
> >> What others think about this ?
> > 
> > I like the idea, but maybe it should be a general IOException since this 
> > same error can happen on file open, selector creation (sometimes), pipe 
> > creation, etc.
> > 
> > -- 
> > - DML
>  
> Sure that would work for me as well :)
>  
> Bye,
> Norman



Re: Special exception for EMFILE / ENFILE when using sockets.

2016-12-06 Thread Norman Maurer
While the idea is good, I think this will not “fly” because on Windows, Winsock 
is used in the JDK. Winsock does not use the same error constants as the BSD 
sockets API (with errno).
On Windows we would need to expose “WSAGetLastError()” values and on 
Unix/Linux/BSD “errno” values. Sure, we could try to map “WSAGetLastError()” 
values to Unix/Linux/BSD “errno” values, but I am not sure this is possible in 
all cases.

That said, we do exactly what you suggested in Netty with our own native 
transport, which currently only supports Linux [1].

[1] https://github.com/netty/netty/blob/4.1/transport-native-epoll/src/main/java/io/netty/channel/unix/Errors.java#L73
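As an illustration (not Netty's actual code, which lives in its native transport glue), the one-time errno-to-exception mapping idea looks roughly like the sketch below; the numeric values are the common Linux ones and are purely illustrative:

```java
import java.io.IOException;
import java.net.SocketException;

public class ErrnoMapping {
    // One-time translation table from raw errno values (as returned by a
    // failed syscall in a native transport) to meaningful Java exceptions.
    static final int ENFILE = 23;       // system file table full
    static final int EMFILE = 24;       // per-process descriptor table full
    static final int ECONNRESET = 104;  // connection reset by peer

    static IOException toException(String op, int errno) {
        switch (errno) {
            case ECONNRESET:
                return new SocketException(op + " failed: connection reset");
            case EMFILE:
            case ENFILE:
                return new IOException(op + " failed: too many open files");
            default:
                return new IOException(op + " failed: errno=" + errno);
        }
    }

    public static void main(String[] args) {
        System.out.println(toException("read", ECONNRESET).getClass().getSimpleName());
    }
}
```

A single table like this keeps the platform-specific values in one place, which is exactly the argument made above against one dedicated exception type per error.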



> On 5 Dec 2016, at 20:09, Bernd Eckenfels <e...@zusammenkunft.net> wrote:
> 
> Hello,
> 
> I know it is a radical idea, but what about exposing errno value in an 
> IOException.
> 
>  I do think that the new ConnectionRefused subtype is helpful, but for each 
> seldom-occurring error case a dedicated exception is more work than a one-time 
> mapping of native error codes to fields.
> 
> Gruss
> Bernd
> -- 
> http://bernd.eckenfels.net
> 
> 
> 
> On Mon, Dec 5, 2016 at 7:28 PM +0100, "Norman Maurer" 
> <norman.mau...@googlemail.com <mailto:norman.mau...@googlemail.com>> wrote:
> 
> 
> > Am 05.12.2016 um 18:48 schrieb David M. Lloyd :
> > 
> >> On 12/05/2016 06:29 AM, Norman Maurer wrote:
> >> Hi all,
> >> 
> >> I wonder if it would be possible to add a new public exception type for 
> >> the situation of an SocketChannel.accept(…) or SocketChannel.open(…)  (and 
> >> the same for ServerSocket / Socket) failing because of too many open files.
> >> The reason is because especially when acting as a server such an exception 
> >> may be something you can easily recover from. At there is basically no way 
> >> to detect if this was the cause of an IOException or not.
> >> 
> >> On unix / linux this are the errno values:
> >> 
> >> [EMFILE]   The per-process descriptor table is full.
> >> [ENFILE]   The system file table is full.
> >> 
> >> For netty we would love to be able to know if this was the case of the 
> >> problem and if so just stop accepting for a period of time to help the 
> >> system to recover.
> >> 
> >> What others think about this ?
> > 
> > I like the idea, but maybe it should be a general IOException since this 
> > same error can happen on file open, selector creation (sometimes), pipe 
> > creation, etc.
> > 
> > -- 
> > - DML
> 
> Sure that would work for me as well :)
> 
> Bye,
> Norman



Re: Special exception for EMFILE / ENFILE when using sockets.

2016-12-05 Thread Norman Maurer


> Am 05.12.2016 um 18:48 schrieb David M. Lloyd <david.ll...@redhat.com>:
> 
>> On 12/05/2016 06:29 AM, Norman Maurer wrote:
>> Hi all,
>> 
>> I wonder if it would be possible to add a new public exception type for the 
>> situation of an SocketChannel.accept(…) or SocketChannel.open(…)  (and the 
>> same for ServerSocket / Socket) failing because of too many open files.
>> The reason is because especially when acting as a server such an exception 
>> may be something you can easily recover from. At there is basically no way 
>> to detect if this was the cause of an IOException or not.
>> 
>> On unix / linux this are the errno values:
>> 
>> [EMFILE]   The per-process descriptor table is full.
>> [ENFILE]   The system file table is full.
>> 
>> For netty we would love to be able to know if this was the case of the 
>> problem and if so just stop accepting for a period of time to help the 
>> system to recover.
>> 
>> What others think about this ?
> 
> I like the idea, but maybe it should be a general IOException since this same 
> error can happen on file open, selector creation (sometimes), pipe creation, 
> etc.
> 
> -- 
> - DML

Sure that would work for me as well :)

Bye,
Norman

Special exception for EMFILE / ENFILE when using sockets.

2016-12-05 Thread Norman Maurer
Hi all,

I wonder if it would be possible to add a new public exception type for the 
situation of a SocketChannel.accept(…) or SocketChannel.open(…) (and the same 
for ServerSocket / Socket) failing because of too many open files. 
The reason is that, especially when acting as a server, such an exception may 
be something you can easily recover from. At the moment there is basically no 
way to detect if this was the cause of an IOException or not.

On Unix/Linux these are the errno values:

[EMFILE]   The per-process descriptor table is full.
[ENFILE]   The system file table is full.

For Netty we would love to be able to know if this was the cause of the 
problem and, if so, just stop accepting for a period of time to help the 
system recover.
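Until such an exception type exists, the only option is the kind of fragile message matching sketched below. This is a hypothetical helper, not code from this thread; the message text is platform- and locale-dependent, which is exactly why a dedicated type would help:

```java
import java.io.IOException;

public class FdExhaustion {
    // Heuristic detection of EMFILE/ENFILE from the exception message.
    // "Too many open files" is what typical Unix/Linux JDKs produce, but
    // this is not guaranteed by any specification.
    static boolean looksLikeFdExhaustion(IOException e) {
        String msg = e.getMessage();
        return msg != null && msg.contains("Too many open files");
    }

    public static void main(String[] args) {
        // An accept loop could pause accepting for a while when this fires.
        System.out.println(looksLikeFdExhaustion(new IOException("accept: Too many open files")));
        System.out.println(looksLikeFdExhaustion(new IOException("Connection refused")));
    }
}
```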

What do others think about this?

Thanks,
Norman







Re: Introduce IOException subclass for ECONNRESET

2016-12-02 Thread Norman Maurer
As I proposed this change, let me work on it and propose a patch.

Thanks for all the replies.

Stay tuned,
Norman

> On 5 Oct 2016, at 10:58, Chris Hegarty <chris.hega...@oracle.com> wrote:
> 
> I filed the following issue to track this:
>  https://bugs.openjdk.java.net/browse/JDK-8167161
> 
> -Chris.
> 
> On 04/10/16 13:52, David M. Lloyd wrote:
>> On 10/04/2016 03:58 AM, Langer, Christoph wrote:
>>> Hi,
>>> 
>>> I think I would also support replacing
>>> sun.net.ConnectionResetException with a publicly available
>>> java.net.ConnectionResetException that subclasses
>>> java.net.SocketException. But, as Chris mentions. a usage example
>>> would be helpful.
>> 
>> At present you're going to be hard-pressed to find existing code that
>> detects a connection reset IOException.  The reason is that, while you
>> *can* search for "Connection reset by peer" in the exception message,
>> that text varies by platform and, more importantly, by locale.  So it is
>> essentially impossible to detect reliably.
>> 
>> But Norman's use case is dead on: a connection reset happens under
>> common circumstances.  In particular, when a TCP socket is closed by the
>> peer before the peer has read all pending data, an RST is returned. This
>> happens very often - when a peer process is terminated for example, or
>> the peer encounters an unrelated exception and terminates the connection
>> abruptly, both of which happen very, very commonly in networked
>> applications.  For servers the choice today is to fill the log with
>> meaningless exception traces or lose potentially more serious problems
>> (EIO for example is something that almost always indicates Bad Things).
>> 
>> I also support this proposal.
>> 
>>>> -Original Message-
>>>> From: net-dev [mailto:net-dev-boun...@openjdk.java.net] On Behalf Of
>>>> Chris
>>>> Hegarty
>>>> Sent: Montag, 12. September 2016 17:07
>>>> To: Florian Weimer <fwei...@redhat.com>; Norman Maurer
>>>> <norman.mau...@googlemail.com>; net-dev@openjdk.java.net
>>>> Subject: Re: Introduce IOException subclass for ECONNRESET
>>>> 
>>>> On 12/09/16 14:50, Florian Weimer wrote:
>>>>> On 08/23/2016 09:40 AM, Norman Maurer wrote:
>>>>>> Hi all,
>>>>>> 
>>>>>> I first asked this on nio-dev[0] but was asked to move this over to
>>>>>> here:
>>>>>> 
>>>>>> I wonder if it would be possible to add a new IOException sub-class
>>>>>> for ECONNRESET. Often you receive these errors if a remote peer closed
>>>>>> the connection and you try to read from it while using NIO. This is
>>>>>> very often not really something that concerns people and can just be
>>>>>> handled the same as a “normal close”.
>>>> 
>>>> So what are the other cases, where ECONNRESET may occur? What is
>>>> equivalent on non-Unix platforms, Windows for example?
>>>> 
>>>> >> At the moment the only way to
>>>>>> detect this is to inspect the IOException message which is really
>>>>>> hacky.
>>>> 
>>>> Do you have examples of code that does this today?
>>>> 
>>>>>> I wonder if we could not add a special IOException sub-class
>>>>>> for this. Something like:
>>>>>> 
>>>>>> ConnectionResetException extends IOException {
>>>>>> }
>>>>> 
>>>>> Shouldn't it be a subclass of SocketException?
>>>> 
>>>> I think it would have to be a subclass of SocketException too, for
>>>> compatibility at least, since that is the type that is thrown
>>>> today.
>>>> 
>>>> sun.net.ConnectionResetException exists today, but I don't think
>>>> that it ever finds its way outside of the implementation. And is
>>>> of course not part of Java SE.
>>>> 
>>>> -Chris.
>>>> 
>>> 
>>
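The fragile status quo David describes, matching on the localized "Connection reset" message because no public ConnectionResetException exists, can be sketched like this (hypothetical helper, illustrative only):

```java
import java.io.IOException;
import java.net.SocketException;

public class ResetDetection {
    // Message matching is the only option today; it breaks on non-English
    // locales and on platforms with different message text, which is the
    // motivation for a public ConnectionResetException subtype.
    static boolean looksLikeConnectionReset(IOException e) {
        return e instanceof SocketException
                && e.getMessage() != null
                && e.getMessage().contains("Connection reset");
    }

    public static void main(String[] args) {
        System.out.println(looksLikeConnectionReset(new SocketException("Connection reset by peer")));
        System.out.println(looksLikeConnectionReset(new SocketException("Broken pipe")));
    }
}
```

With the proposed subclass, this whole helper would collapse to a single instanceof check.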