I can't say more than others, but based on my own experience I can tell
you that some exceptions are swallowed by Netty or the surrounding
classes, resulting in dead connections.
When I was working on the socketcan transport, its initialization
failures led to connections which allowed writes but never actually
emitted anything to the wire.

Based on the stack trace: are you using TCP? Is the connection
initialized properly? Some devices refuse more than one connection at a
time. Also, as far as I know, generated drivers must register their
message kinds to be initialized properly; the Netty pipeline is then
configured with the plc4x codec handler.
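
A quick way to verify where the exception dies is to append a catch-all
handler at the very tail of the pipeline once the connection is up. This
is plain Netty API, nothing plc4x specific; the handler name and the
close-on-error policy are just my assumptions:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    // Catch-all tail handler: anything the plc4x codec handler leaves
    // unhandled lands here instead of Netty's DefaultChannelPipeline tail.
    public class TailExceptionHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
            // log it and close, so a reset peer doesn't linger as a dead connection
            System.err.println("Unhandled pipeline exception: " + cause);
            ctx.close();
        }
    }

    // usage, once you hold the channel the connection actually uses:
    // channel.pipeline().addLast("tail-exceptions", new TailExceptionHandler());

If exceptionCaught() fires there, the placement is right and you can
decide how to react instead of letting it reach Netty's default tail.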

Best,
Łukasz

On Wed, 16 Sep 2020 at 18:23, Vladyslav Milutin <v.milu...@aegas.io> wrote:

> Hi Stefano,
>
> The driver is designed for custom IoT devices; the driver itself just
> overrides abstract methods like protocol(), canWrite(), canRead() and
> getConfiguration(). It doesn't contain any logic.
>
> I think the PooledDriverManager can help, but will it check the
> connection itself, and not just when getConnection() is called?
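>
> To illustrate what I mean, a rough sketch (the connection string is a
> placeholder and "manager" is whatever pooled manager we end up using):
>
>     // held open for a long time: nobody re-checks it after a peer reset
>     PlcConnection longLived = manager.getConnection("custom:tcp://device:9000");
>
>     // vs. fetched per request: the pool can validate it on every getConnection()
>     try (PlcConnection perRequest = manager.getConnection("custom:tcp://device:9000")) {
>         // use it, then hand it back to the pool on close()
>     }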
>
>
> On 16 Sep 2020, at 19:07, Stefano Bossi <stefano.bo...@gmail.com> wrote:
>
>  Hi Vladyslav,
>
> just because I am a curious guy: why did you choose to build a custom
> driver?
>
> Do you need something special?
>
> Regards,
> Stefano
>
> P.S. Feel free to answer "it's none of your business!!!" As said, it's
> just curiosity; anyone has the right to choose their own road.
>
>
>
> On 16/09/2020 17:54, Christofer Dutz wrote:
>
> Hi Vladyslav,
>
> oh ... a custom driver. In that case it will definitely be tricky to
> help you unless we can have a look at the code.
>
> Is this something you consider bringing into the PLC4X project, or
> something that's meant to stay outside of the project?
>
> I guess this is the first time such a question has come up ;-)
>
> With integrations, I was referring to the Camel, Kafka, Edgent, NiFi, ...
> integrations that PLC4X provides. But I guess you answered the
> question and you're not using any of them.
>
> The connection pool does a little more. Before returning a connection,
> it checks whether it's still alive and, if it isn't, creates a new one.
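>
> In code that's roughly (a sketch; in 0.7.0 the pooled manager ships in
> plc4j-connection-pool as PooledPlcDriverManager, and the connection
> string below is just a placeholder):
>
>     PlcDriverManager manager = new PooledPlcDriverManager();
>     try (PlcConnection connection = manager.getConnection("custom:tcp://192.168.0.1:9000")) {
>         // the pool checks the connection is alive before handing it out,
>         // and replaces one that was reset by the peer
>     }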
>
> Chris
>
>
>
> On 16.09.20, 17:39, "Vladyslav Milutin" <v.milu...@aegas.io> wrote:
>
>     Hi Christofer,
>
>     Thanks for your quick response.
>     I'm using a custom driver which extends GeneratedDriverBase; for the
>     connection I use a simple call to .connect(), roughly like the sketch
>     below. I know that you have the PooledDriverManager, but won't it
>     have the same issue if the connection was reset by peer, since it
>     just looks up the specific connection?
>     As for integrations: plc4j-transport-tcp, plc4j-api, plc4j-spi,
>     plc4j-connection-pool, and other code generation and build utilities.
>     Or by integrations do you mean frameworks? If yes, Spring Framework.
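>
>     What I do today, simplified (the "custom" scheme and the address are
>     placeholders for our driver's protocol code and the device, and
>     driverManager is a plain PlcDriverManager):
>
>         PlcConnection connection = driverManager.getConnection("custom:tcp://device:9000");
>         // kept open and reused for a long time; nothing re-checks it,
>         // so a peer reset leaves it dead until we notice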
>
>     Kind regards,
>     Vlad
>
>     On Wed, 16 Sep 2020 at 17:28, Christofer Dutz <christofer.d...@c-ware.de> wrote:
>
>     > Hi Vladyslav,
>     >
>     > could you please tell us which driver and which version you are
>     > using? Also, it would be interesting to know whether you are using
>     > any integration modules.
>     >
>     > Chris
>     >
>     > On 16.09.20, 14:36, "Vladyslav Milutin" <v.milu...@aegas.io> wrote:
>     >
>     >     Hello guys,
>     >
>     >     I'm writing in the hope that you can help me with exception
>     >     handling. Currently, after a long time, the connection can be
>     >     reset by peer; see the stack trace below.
>     >
>     >     I've tried to add a custom ChannelHandler which overrides
>     >     exceptionCaught() and to add it in Driver#initializePipeline()
>     >     (see code below). I also tried adding it to the channel that can
>     >     be obtained from DefaultNettyPlcConnection. Neither of them was
>     >     actually added to the pipeline where this exception was thrown.
>     >
>     >     plc4x version: 0.7.0
>     >
>     >     Stack trace:
>     >     2020-09-16 13:50:03.340 WARN  [nioEventLoopGroup-58-1]
>     >     [io.netty.channel.DefaultChannelPipeline] onUnhandledInboundException -
>     >     An exceptionCaught() event was fired, and it reached at the tail of the
>     >     pipeline. It usually means the last handler in the pipeline did not
>     >     handle the exception.
>     >     java.io.IOException: Connection reset by peer
>     >       at java.base/sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>     >       at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>     >       at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276)
>     >       at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:233)
>     >       at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:223)
>     >       at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:358)
>     >       at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
>     >       at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
>     >       at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
>     >       at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
>     >       at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
>     >       at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
>     >       at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
>     >       at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
>     >       at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>     >       at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>     >       at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>     >       at java.base/java.lang.Thread.run(Thread.java:834)
>     >
>     >     Driver#initializePipeline:
>     >         try {
>     >             final Channel channel = channelFactory.createChannel(this.handler);
>     >             channelFactory.initializePipeline(channel.pipeline());
>     >         } catch (PlcConnectionException e) {
>     >             // log the cause too, so the failure isn't silently swallowed
>     >             log.error("Failed to create channel", e);
>     >         }
>     >
>     >     ChannelHandler:
>     >         @Override
>     >         public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
>     >             log.warn("ExceptionCaught in worker: ctx = [{}], cause = [{}, {}], workerName = [{}]",
>     >                     ctx, cause.getClass(), cause.getMessage(), workerName);
>     >             // else-if chain: originally a ConnectTimeoutException also fell
>     >             // through to the "Unexpected exception" branch below
>     >             if (cause instanceof ConnectTimeoutException) {
>     >                 log.warn("ConnectionTimeout caught: workerName = [{}]", workerName);
>     >             } else if (cause instanceof IOException && cause.getMessage() != null
>     >                     && cause.getMessage().contains("Connection reset by peer")) {
>     >                 log.warn("Connection reset by peer caught: workerName = [{}]", workerName);
>     >             } else {
>     >                 log.info("Unexpected exception caught: workerName = [{}]", workerName);
>     >             }
>     >             this.callback.accept(cause);
>     >         }
>     >
>     >     DefaultNettyPlcConnection#channel:
>     >         log.info("Trying to get connection channel: worker name = [{}]", this.workerName);
>     >         final Channel channel = ((DefaultNettyPlcConnection) this.connection).getChannel();
>     >         log.info("Channel obtained successfully. Adding custom channelHandler to it: channel = [{}], workerName = [{}]",
>     >                 channel, this.workerName);
>     >         channel.pipeline().addLast(this.channelHandler);
>     >         log.info("ChannelHandler added: channel = [{}], channelHandler = [{}], workerName = [{}]",
>     >                 channel, this.channelHandler, this.workerName);
>     >
>     >     Kind regards,
>     >     Vlad
>     >
>     >
>
