Re: [DISCUSS] Possible changes to discriminator field handling

2020-08-25 Thread Otto Fowler
 Can you give examples?

On August 25, 2020 at 19:22:40, Łukasz Dywicki (l...@code-house.org) wrote:

I have been trying to write new mspecs and found that the discriminator
handling is quite limiting in its current form.

For example, it is possible to declare a type argument, but it is not
possible to use it as a discriminator.
It is also not possible to use a virtual or manual field as a
discriminator, even though such a field could very well be the source of
type information. I am aware that a virtual field is computed at runtime
from a given expression.
I am not entirely sure of all the implications of changing the handling.
Currently a discriminator is a simple field which gets registered in the
discriminator set.

What do you think about making `discriminator` a modifier instead, which
could be applied to type arguments and in other places as well? At the
moment any use of [typeSwitch] with an undeclared discriminator causes
null pointer errors in the FreeMarker templates and a failure.

Cheers,
Łukasz


Re: Protocol encapsulation

2020-08-25 Thread Otto Fowler
I myself would rather have multiple clean drivers than one that is so
complicated.
Maybe there could be a way to structure the mspecs such that you could
re-use or re-combine them (i.e. have multiple mspecs that, in different
combinations, generate different drivers).



On August 25, 2020 at 19:11:34, Łukasz Dywicki (l...@code-house.org) wrote:

I have come to a point where I need to brainstorm a little bit.

I started my inspection of CANopen and it looks like a whole new thing
built on top of CAN. What is worth remembering is that CAN bus has many
flavors and the transport layer can be pretty much anything: a serial
interface (slcan), socketcan, UDP (CAN over Ethernet), wrapping in other
transports such as EtherCAT, all the way to vendor-specific interfaces
and SDKs.
All of the above result in CAN frames in slightly different formats, yet
all of these interfaces obviously follow the rules which are relevant
for CAN.

The standardized CAN frame formats are:
- CAN 2.0A, with an 11-bit identifier and a maximum of 8 bytes of data;
sometimes called the standard frame format (SFF).
- CAN 2.0B, using a 29-bit identifier; sometimes called the extended
frame format (EFF). On the bus level the EFF identifier is split into 11
bits followed by 18 bits. Data length is also 8 bytes max.
- CAN FD: I am not entirely clear on its identifiers, but it allows up
to 64 bytes of data. Most modern cars already use it.

The reason I list all of the above is to show the rich set of transport
options, which we have not encountered for PLCs so far. Most of the
protocols we have are bound to one or at most two transports.
In this case we have a standard physical layer, different frames, and
different interfaces, yet all should lead to a unified result.
CANopen, mentioned earlier in this mail, defines OSI layers which could
be seen as the network, transport, session, and application layers.
CANopen is not the only attempt to standardize payloads; there are
others dedicated to the narrow areas where each standard is used.

Implementing CANopen has one challenge up front: it divides the 11-bit
identifier into a 4-bit function code and a 7-bit node identifier. There
are also some attempts to make use of CAN 2.0B/EFF identifiers there,
however my knowledge of that is currently fairly limited.
The problem I face now is that it is impossible to implement CANopen
without relying on a specific transport. Given that we have many
transports, multiplied at some point in time by the various frame
encoding formats (2.0B/CAN FD), we will quickly end up with the most
bloated CAN interface ever built.

This brings me to the following finding for CAN: we have a clear
indication of an application layer which is independent of the transport
itself:
CAN transport -> CAN 2.0A frame -> CANopen

Now, in order to realize the above, we need a few things. First of all,
I would say we need to re-parse part of the lower layer's contents into
something new.
This is quite the opposite of what we know from IP, where everything
built above layer 3 does not care about the MAC/network layer at all.
CANopen does care, because it requires access to the identifier in order
to determine the function code.
I have been thinking about how we could arrange that. One idea I have so
far is re-wrapping the read buffer and its sections to re-parse them.
The second option I see is sticking to a pre-defined CAN frame format.
While socketcan looks like a natural choice for the latter option, there
seems to be some legitimate reason why so many interface manufacturers
still offer their own APIs to access the bus. Forcing longer-than-necessary
identifiers might also hurt the C and embedded work Chris is doing.
After all, encoding an 11-bit CAN 2.0A identifier in a 32-bit memory
block takes almost 3 times more space than it actually needs.

Looking forward to your comments and opinions.

Cheers,
Łukasz


[DISCUSS] Possible changes to discriminator field handling

2020-08-25 Thread Łukasz Dywicki
I have been trying to write new mspecs and found that the discriminator
handling is quite limiting in its current form.

For example, it is possible to declare a type argument, but it is not
possible to use it as a discriminator.
It is also not possible to use a virtual or manual field as a
discriminator, even though such a field could very well be the source of
type information. I am aware that a virtual field is computed at runtime
from a given expression.
I am not entirely sure of all the implications of changing the handling.
Currently a discriminator is a simple field which gets registered in the
discriminator set.

What do you think about making `discriminator` a modifier instead, which
could be applied to type arguments and in other places as well? At the
moment any use of [typeSwitch] with an undeclared discriminator causes
null pointer errors in the FreeMarker templates and a failure.
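
To make the proposal concrete, here is an mspec-like sketch. This is illustrative pseudocode only: the ability to switch on a virtual field or a type argument does not exist in the current generator, and all type names are made up.

```
[discriminatedType 'ExamplePayload' (uint 8 'function')
    // hypothetical: derive the switch value from an expression
    [virtual uint 4 'code' 'function >> 4']
    // hypothetical: 'code' would be accepted by typeSwitch even though
    // it is not a declared [discriminator ...] field
    [typeSwitch 'code'
        ['0x1' ExampleCommand]
        ['0x2' ExampleReply]
    ]
]
```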

Cheers,
Łukasz


Protocol encapsulation

2020-08-25 Thread Łukasz Dywicki
I have come to a point where I need to brainstorm a little bit.

I started my inspection of CANopen and it looks like a whole new thing
built on top of CAN. What is worth remembering is that CAN bus has many
flavors and the transport layer can be pretty much anything: a serial
interface (slcan), socketcan, UDP (CAN over Ethernet), wrapping in other
transports such as EtherCAT, all the way to vendor-specific interfaces
and SDKs.
All of the above result in CAN frames in slightly different formats, yet
all of these interfaces obviously follow the rules which are relevant
for CAN.

The standardized CAN frame formats are:
- CAN 2.0A, with an 11-bit identifier and a maximum of 8 bytes of data;
sometimes called the standard frame format (SFF).
- CAN 2.0B, using a 29-bit identifier; sometimes called the extended
frame format (EFF). On the bus level the EFF identifier is split into 11
bits followed by 18 bits. Data length is also 8 bytes max.
- CAN FD: I am not entirely clear on its identifiers, but it allows up
to 64 bytes of data. Most modern cars already use it.
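
For illustration, the two identifier widths can be sketched with the mask conventions Linux socketcan uses in its 32-bit can_id field (constants as in <linux/can.h>; the Java wrapper class itself is just a sketch):

```java
// Sketch of CAN identifier handling using socketcan's <linux/can.h>
// mask conventions; the Java class itself is illustrative only.
public class CanIdFormats {
    static final int CAN_SFF_MASK = 0x000007FF; // 11-bit standard (SFF) id
    static final int CAN_EFF_MASK = 0x1FFFFFFF; // 29-bit extended (EFF) id
    static final int CAN_EFF_FLAG = 0x80000000; // set when the id is extended

    // true when the 32-bit can_id carries a 29-bit extended identifier
    static boolean isExtended(int canId) {
        return (canId & CAN_EFF_FLAG) != 0;
    }

    // strip the flag bits and keep only the identifier itself
    static int identifier(int canId) {
        return canId & (isExtended(canId) ? CAN_EFF_MASK : CAN_SFF_MASK);
    }

    public static void main(String[] args) {
        int sff = 0x181;                     // 11-bit identifier
        int eff = CAN_EFF_FLAG | 0x18DAF110; // 29-bit identifier
        System.out.println(isExtended(sff));                      // false
        System.out.println(Integer.toHexString(identifier(eff))); // 18daf110
    }
}
```

This also shows the memory-overhead point below: an 11-bit SFF identifier occupies only the low 11 bits of the 32-bit field.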

The reason I list all of the above is to show the rich set of transport
options, which we have not encountered for PLCs so far. Most of the
protocols we have are bound to one or at most two transports.
In this case we have a standard physical layer, different frames, and
different interfaces, yet all should lead to a unified result.
CANopen, mentioned earlier in this mail, defines OSI layers which could
be seen as the network, transport, session, and application layers.
CANopen is not the only attempt to standardize payloads; there are
others dedicated to the narrow areas where each standard is used.

Implementing CANopen has one challenge up front: it divides the 11-bit
identifier into a 4-bit function code and a 7-bit node identifier. There
are also some attempts to make use of CAN 2.0B/EFF identifiers there,
however my knowledge of that is currently fairly limited.
The problem I face now is that it is impossible to implement CANopen
without relying on a specific transport. Given that we have many
transports, multiplied at some point in time by the various frame
encoding formats (2.0B/CAN FD), we will quickly end up with the most
bloated CAN interface ever built.

This brings me to the following finding for CAN: we have a clear
indication of an application layer which is independent of the transport
itself:
CAN transport -> CAN 2.0A frame -> CANopen
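
The identifier split mentioned above can be sketched in a few lines (a minimal sketch; the example value 0x181 as "TPDO1 of node 1" follows CANopen's pre-defined connection set):

```java
// Minimal sketch of the CANopen COB-ID split described above: the 11-bit
// identifier is a 4-bit function code followed by a 7-bit node id.
public class CobId {
    static int functionCode(int cobId) {
        return (cobId >> 7) & 0x0F; // upper 4 of the 11 bits
    }

    static int nodeId(int cobId) {
        return cobId & 0x7F;        // lower 7 bits
    }

    public static void main(String[] args) {
        int cobId = 0x181; // conventionally TPDO1 of node 1
        System.out.println(functionCode(cobId)); // 3
        System.out.println(nodeId(cobId));       // 1
    }
}
```

Whatever the transport delivers, the application layer only needs this 11-bit value plus the data bytes, which is exactly why it has to reach down into the frame's identifier.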

Now, in order to realize the above, we need a few things. First of all,
I would say we need to re-parse part of the lower layer's contents into
something new.
This is quite the opposite of what we know from IP, where everything
built above layer 3 does not care about the MAC/network layer at all.
CANopen does care, because it requires access to the identifier in order
to determine the function code.
I have been thinking about how we could arrange that. One idea I have so
far is re-wrapping the read buffer and its sections to re-parse them.
The second option I see is sticking to a pre-defined CAN frame format.
While socketcan looks like a natural choice for the latter option, there
seems to be some legitimate reason why so many interface manufacturers
still offer their own APIs to access the bus. Forcing longer-than-necessary
identifiers might also hurt the C and embedded work Chris is doing.
After all, encoding an 11-bit CAN 2.0A identifier in a 32-bit memory
block takes almost 3 times more space than it actually needs.

Looking forward to your comments and opinions.

Cheers,
Łukasz


Re: Leaking nioEventLoopGroup threads

2020-08-25 Thread Julian Feinauer
Also thanks to you, Stefano, for the nice community work you do!

Julian

From: Stefano Bossi
Date: Tuesday, 25 August 2020 at 20:20
To: , Adam Rossi , Julian Feinauer

Subject: Re: Leaking nioEventLoopGroup threads

Yupiii !!!

good news !!!

Regards,
S.

On 25/08/2020 15:45, Adam Rossi wrote:

Julian,

My apologies - your fix did indeed work. The issue is that
PooledPlcDriverManager does not seem to be calling the close method on the
connection. Switching back to PlcDriverManager from PooledPlcDriverManager
results in your new log comments showing up in the log, and more
importantly no more leaks of the nioEventLoopGroup threads. I have tested
the code in a loop for an hour or so and it is working perfectly.

The PooledPlcDriverManager seems to intercept the close method on the
plcConnection (lines 125 - 130):

if ("close".equals(method.getName())) {
    LOGGER.debug("close called on {}", plcConnection);
    proxyInvalidated.set(true);
    keyedObjectPool.returnObject(poolKey, plcConnection);
    return null;
} else {

Which makes sense as it is trying to keep active connections pooled.
However, when this connection is again retrieved from the pool it seems the
plcConnection connects again and creates an additional nioEventLoopGroup
thread, which is never closed.

I am new to working with this project and Jira. It seems to me that I
should close the issue I just created as your fix does indeed correct the
original issue, and perhaps open another issue on PooledPlcDriverManager
for this thread leak?

Regards, Adam

On Mon, Aug 24, 2020 at 4:39 AM Julian Feinauer <
j.feina...@pragmaticminds.de> wrote:

Hi,

short feedback. I looked into the code and indeed it seems that we had a
bug there which could lead to the socket leak you described.
I pushed a fix in the branch:

https://github.com/apache/plc4x/tree/bugfix/close-eventloop-after-channel

Would you mind taking a look and testing this with your code @Adam Rossi?

Thanks!
Julian

On 24.08.20 at 08:26, "Julian Feinauer" <
j.feina...@pragmaticminds.de> wrote:

Perhaps, some related questions:

- Are you using Linux for your tests?
- Do you close all connections properly?
Normally the `PlcConnection.close()` method should close the EventLoop.

Julian

On 24.08.20 at 08:23, "Julian Feinauer" <
j.feina...@pragmaticminds.de> wrote:

Hi Adam,

I will have a look today!

Do we have a Jira issue for it already?

Julian

On 24.08.20 at 07:38, "Christofer Dutz" <
christofer.d...@c-ware.de> wrote:

Hi Adam,

of course that's unfortunate ... also I will not be able to
address this issue soon as I have to work on the tasks of my research
project.
I have one more month to work on this and I'm months behind
schedule because I have been doing free support way too much lately.

I really hope Julian will be able to help ... he's way more
into the details of Netty than I am (cause he's got the book ;-) )

So Julian? ... it would be super awesome if you could take on
this issue.

Chris

On 24.08.20 at 00:17, "Adam Rossi" wrote:

Thanks, I did test with 0.8.0-SNAPSHOT and see the same
behavior. In every plcConnection a nioEventLoopGroup thread is created
and does not ever seem to be destroyed.

I wish I understood the io.netty.channel.EventLoopGroup class better to
be more helpful here. Would an example program that reproduces this
thread leak be useful?

jconsole shows the thread data as:

Name: nioEventLoopGroup-19-1
State: RUNNABLE
Total blocked: 0  Total waited: 0

Stack trace:
java.base@13.0.2/sun.nio.ch.EPoll.wait(Native Method)
java.base@13.0.2/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)
java.base@13.0.2/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124)
   - locked io.netty.channel.nio.SelectedSelectionKeySet@1838f97
   - locked sun.nio.ch.EPollSelectorImpl@1f49287
java.base@13.0.2/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:141)
app//io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
app//io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)


Re: Leaking nioEventLoopGroup threads

2020-08-25 Thread Stefano Bossi
Yupiii !!!

good news !!!

Regards,
S.


On 25/08/2020 15:45, Adam Rossi wrote:
> Julian,
>
> My apologies - your fix did indeed work. The issue is that
> PooledPlcDriverManager does not seem to be calling the close method on the
> connection. Switching back to PlcDriverManager from PooledPlcDriverManager
> results in your new log comments showing up in the log, and more
> importantly no more leaks of the nioEventLoopGroup threads. I have tested
> the code in a loop for an hour or so and it is working perfectly.
>
> The PooledPlcDriverManager seems to intercept the close method on the
> plcConnection (lines 125 - 130):
>
> if ("close".equals(method.getName())) {
> LOGGER.debug("close called on {}", plcConnection);
> proxyInvalidated.set(true);
> keyedObjectPool.returnObject(poolKey, plcConnection);
> return null;
> } else {
>
> Which makes sense as it is trying to keep active connections pooled.
> However, when this connection is again retrieved from the pool it seems the
> plcConnection connects again and creates an additional nioEventLoopGroup
> thread, which is never closed.
>
> I am new to working with this project and Jira. It seems to me that I
> should close the issue I just created as your fix does indeed correct the
> original issue, and perhaps open another issue on PooledPlcDriverManager
> for this thread leak?
>
> Regards, Adam
>
> On Mon, Aug 24, 2020 at 4:39 AM Julian Feinauer <
> j.feina...@pragmaticminds.de> wrote:
>
>> Hi,
>>
>> short feedback. I looked into the code and indeed it seems that we had a
>> bug there which could lead to the socket leak you described.
>> I pushed a fix in the branch:
>>
>> https://github.com/apache/plc4x/tree/bugfix/close-eventloop-after-channel
>>
>> Would you mind taking a look and testing this with your code @Adam Rossi?
>>
>> Thanks!
>> Julian
>>
>> On 24.08.20 at 08:26, "Julian Feinauer" <
>> j.feina...@pragmaticminds.de> wrote:
>>
>> Perhaps, some related questions:
>>
>> - You are using Linux for your Tests?
>> - Do you close all Connections properly?
>> Normally the `PlcConnection.close()` method should close the EventLoop.
>>
>> Julian
>>
>> On 24.08.20 at 08:23, "Julian Feinauer" <
>> j.feina...@pragmaticminds.de> wrote:
>>
>> Hi Adam,
>>
>> I will have a look today!
>>
>> Do we have a Jira Issue for it already?
>>
>> Julian
>>
>> On 24.08.20 at 07:38, "Christofer Dutz" <
>> christofer.d...@c-ware.de> wrote:
>>
>> Hi Adam,
>>
>> of course that's unfortunate ... also I will not be able to
>> address this issue soon as I have to work on the tasks of my research
>> project.
>> I have one more month to work on this and I'm months behind
>> schedule because I have been doing free support way too much lately.
>>
>> I really hope Julian will be able to help ... he's way more
>> into the details of Netty than I am (cause he's got the book ;-) )
>>
>> So Julian? ... it would be super awesome if you could take on
>> this issue.
>>
>> Chris
>>
>>
>>
>> On 24.08.20 at 00:17, "Adam Rossi" wrote:
>>
>> Thanks, I did test with 0.8.0-SNAPSHOT and see the same
>> behavior. In every
>> plcConnection a nioEventLoopGroup thread is created and
>> does not ever seem
>> to be destroyed.
>>
>> I wish I understood the io.netty.channel.EventLoopGroup
>> class better to be
>> more helpful here. Would an example program that
>> reproduces this thread
>> leak be useful?
>>
>> jconsole shows the thread data as:
>>
>> Name: nioEventLoopGroup-19-1
>> State: RUNNABLE
>> Total blocked: 0  Total waited: 0
>>
>> Stack trace:
>> java.base@13.0.2/sun.nio.ch.EPoll.wait(Native Method)
>> java.base@13.0.2
>>
>> /sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)
>> java.base@13.0.2
>>
>> /sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124)
>>- locked
>> io.netty.channel.nio.SelectedSelectionKeySet@1838f97
>>- locked sun.nio.ch.EPollSelectorImpl@1f49287
>> java.base@13.0.2/sun.nio.ch
>> .SelectorImpl.select(SelectorImpl.java:141)
>>
>> app//io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
>>
>> app//io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
>>
>> app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
>>
>> app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>>
>> app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>>
>> 

Re: Leaking nioEventLoopGroup threads

2020-08-25 Thread Julian Feinauer
Hi Adam,

sorry for my late response, and in fact thanks for your feedback - happy to
hear it! I will merge my branch and close the issue, no worries.

And you are exactly right: just log another issue for the pool and we hope
that someone may find the time to have a look there :)

Julian

PS: As you have already dug into the code you can of course try to fix it
yourself; I would be more than happy to get a PR from you :)

From: Adam Rossi
Date: Tuesday, 25 August 2020 at 15:45
To: Julian Feinauer
Cc: "dev@plc4x.apache.org"
Subject: Re: Leaking nioEventLoopGroup threads

Julian,

My apologies - your fix did indeed work. The issue is that 
PooledPlcDriverManager does not seem to be calling the close method on the 
connection. Switching back to PlcDriverManager from PooledPlcDriverManager 
results in your new log comments showing up in the log, and more importantly no 
more leaks of the nioEventLoopGroup threads. I have tested the code in a loop 
for an hour or so and it is working perfectly.

The PooledPlcDriverManager seems to intercept the close method on the 
plcConnection (lines 125 - 130):

if ("close".equals(method.getName())) {
LOGGER.debug("close called on {}", plcConnection);
proxyInvalidated.set(true);
keyedObjectPool.returnObject(poolKey, plcConnection);
return null;
} else {

Which makes sense as it is trying to keep active connections pooled. However, 
when this connection is again retrieved from the pool it seems the 
plcConnection connects again and creates an additional nioEventLoopGroup 
thread, which is never closed.

I am new to working with this project and Jira. It seems to me that I should 
close the issue I just created as your fix does indeed correct the original 
issue, and perhaps open another issue on PooledPlcDriverManager for this thread 
leak?

Regards, Adam

On Mon, Aug 24, 2020 at 4:39 AM Julian Feinauer 
<j.feina...@pragmaticminds.de> wrote:
Hi,

short feedback. I looked into the code and indeed it seems that we had a bug
there which could lead to the socket leak you described.
I pushed a fix in the branch:

https://github.com/apache/plc4x/tree/bugfix/close-eventloop-after-channel

Would you mind taking a look and testing this with your code @Adam Rossi?

Thanks!
Julian

On 24.08.20 at 08:26, "Julian Feinauer" <j.feina...@pragmaticminds.de> wrote:

Perhaps, some related questions:

- You are using Linux for your Tests?
- Do you close all Connections properly?
Normally the `PlcConnection.close()` method should close the EventLoop.

Julian

On 24.08.20 at 08:23, "Julian Feinauer" <j.feina...@pragmaticminds.de> wrote:

Hi Adam,

I will have a look today!

Do we have a Jira Issue for it already?

Julian

On 24.08.20 at 07:38, "Christofer Dutz" <christofer.d...@c-ware.de> wrote:

Hi Adam,

of course that's unfortunate ... also I will not be able to address 
this issue soon as I have to work on the tasks of my research project.
I have one more month to work on this and I'm months behind 
schedule because I have been doing free support way too much lately.

I really hope Julian will be able to help ... he's way more into 
the details of Netty than I am (cause he's got the book ;-) )

So Julian? ... it would be super awesome if you could take on this 
issue.

Chris



On 24.08.20 at 00:17, "Adam Rossi" <ac.ro...@gmail.com> wrote:

Thanks, I did test with 0.8.0-SNAPSHOT and see the same 
behavior. In every
plcConnection a nioEventLoopGroup thread is created and does 
not ever seem
to be destroyed.

I wish I understood the io.netty.channel.EventLoopGroup class 
better to be
more helpful here. Would an example program that reproduces 
this thread
leak be useful?

jconsole shows the thread data as:

Name: nioEventLoopGroup-19-1
State: RUNNABLE
Total blocked: 0  Total waited: 0

Stack trace:

java.base@13.0.2/sun.nio.ch.EPoll.wait(Native Method)
java.base@13.0.2

/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)
java.base@13.0.2
/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124)
   - locked io.netty.channel.nio.SelectedSelectionKeySet@1838f97
   - locked sun.nio.ch.EPollSelectorImpl@1f49287

java.base@13.0.2/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:141)

app//io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)


Re: Leaking nioEventLoopGroup threads

2020-08-25 Thread Adam Rossi
Julian,

My apologies - your fix did indeed work. The issue is that
PooledPlcDriverManager does not seem to be calling the close method on the
connection. Switching back to PlcDriverManager from PooledPlcDriverManager
results in your new log comments showing up in the log, and more
importantly no more leaks of the nioEventLoopGroup threads. I have tested
the code in a loop for an hour or so and it is working perfectly.

The PooledPlcDriverManager seems to intercept the close method on the
plcConnection (lines 125 - 130):

if ("close".equals(method.getName())) {
LOGGER.debug("close called on {}", plcConnection);
proxyInvalidated.set(true);
keyedObjectPool.returnObject(poolKey, plcConnection);
return null;
} else {

Which makes sense as it is trying to keep active connections pooled.
However, when this connection is again retrieved from the pool it seems the
plcConnection connects again and creates an additional nioEventLoopGroup
thread, which is never closed.
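
The interception pattern quoted above can be reproduced with a plain java.lang.reflect.Proxy. This is a self-contained sketch with illustrative names, not PLC4X's actual classes; it shows why the real close(), and the event-loop shutdown behind it, never runs when the proxy returns the connection to the pool instead:

```java
import java.lang.reflect.Proxy;

// Self-contained sketch of the close() interception pattern described
// above (illustrative names, not PLC4X's actual code): a dynamic proxy
// swallows close() so the underlying resource can go back to a pool.
public class CloseInterceptDemo {
    interface Connection extends AutoCloseable {
        String read();
        @Override void close();
    }

    // wrap a real connection in a proxy that intercepts close()
    static Connection pooledProxy(Connection real) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, args) -> {
                    if ("close".equals(method.getName())) {
                        // here the connection would be returned to the
                        // pool; the real close() is never invoked
                        return null;
                    }
                    return method.invoke(real, args);
                });
    }

    public static void main(String[] args) {
        final boolean[] reallyClosed = {false};
        Connection real = new Connection() {
            public String read() { return "data"; }
            public void close() { reallyClosed[0] = true; }
        };
        Connection pooled = pooledProxy(real);
        pooled.close(); // intercepted: the underlying close() never runs
        System.out.println("data read: " + pooled.read());
        System.out.println("really closed: " + reallyClosed[0]);
    }
}
```

So if reconnecting a pooled connection allocates a fresh event loop each time while close() is always swallowed, the groups accumulate exactly as observed.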

I am new to working with this project and Jira. It seems to me that I
should close the issue I just created as your fix does indeed correct the
original issue, and perhaps open another issue on PooledPlcDriverManager
for this thread leak?

Regards, Adam

On Mon, Aug 24, 2020 at 4:39 AM Julian Feinauer <
j.feina...@pragmaticminds.de> wrote:

> Hi,
>
> short feedback. I looked into the code and indeed it seems that we had a
> bug there which could lead to the socket leak you described.
> I pushed a fix in the branch:
>
> https://github.com/apache/plc4x/tree/bugfix/close-eventloop-after-channel
>
> Would you mind taking a look and testing this with your code @Adam Rossi?
>
> Thanks!
> Julian
>
> On 24.08.20 at 08:26, "Julian Feinauer" <
> j.feina...@pragmaticminds.de> wrote:
>
> Perhaps, some related questions:
>
> - You are using Linux for your Tests?
> - Do you close all Connections properly?
> Normally the `PlcConnection.close()` method should close the EventLoop.
>
> Julian
>
> On 24.08.20 at 08:23, "Julian Feinauer" <
> j.feina...@pragmaticminds.de> wrote:
>
> Hi Adam,
>
> I will have a look today!
>
> Do we have a Jira Issue for it already?
>
> Julian
>
> On 24.08.20 at 07:38, "Christofer Dutz" <
> christofer.d...@c-ware.de> wrote:
>
> Hi Adam,
>
> of course that's unfortunate ... also I will not be able to
> address this issue soon as I have to work on the tasks of my research
> project.
> I have one more month to work on this and I'm months behind
> schedule because I have been doing free support way too much lately.
>
> I really hope Julian will be able to help ... he's way more
> into the details of Netty than I am (cause he's got the book ;-) )
>
> So Julian? ... it would be super awesome if you could take on
> this issue.
>
> Chris
>
>
>
> On 24.08.20 at 00:17, "Adam Rossi" wrote:
>
> Thanks, I did test with 0.8.0-SNAPSHOT and see the same
> behavior. In every
> plcConnection a nioEventLoopGroup thread is created and
> does not ever seem
> to be destroyed.
>
> I wish I understood the io.netty.channel.EventLoopGroup
> class better to be
> more helpful here. Would an example program that
> reproduces this thread
> leak be useful?
>
> jconsole shows the thread data as:
>
> Name: nioEventLoopGroup-19-1
> State: RUNNABLE
> Total blocked: 0  Total waited: 0
>
> Stack trace:
> java.base@13.0.2/sun.nio.ch.EPoll.wait(Native Method)
> java.base@13.0.2
>
> /sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)
> java.base@13.0.2
>
> /sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124)
>- locked
> io.netty.channel.nio.SelectedSelectionKeySet@1838f97
>- locked sun.nio.ch.EPollSelectorImpl@1f49287
> java.base@13.0.2/sun.nio.ch
> .SelectorImpl.select(SelectorImpl.java:141)
>
> app//io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
>
> app//io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
>
> app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
>
> app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>
> app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>
> app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
> java.base@13.0.2/java.lang.Thread.run(Thread.java:830)
>
>
> Regards, Adam
>
> On Sun, Aug 23, 2020 at 1:00 PM Christofer Dutz <
> christofer.d...@c-ware.de>
> wrote:
>
>  

[jira] [Created] (PLC4X-242) nioEventLoopGroup thread leak

2020-08-25 Thread Adam Rossi (Jira)
Adam Rossi created PLC4X-242:


 Summary: nioEventLoopGroup thread leak
 Key: PLC4X-242
 URL: https://issues.apache.org/jira/browse/PLC4X-242
 Project: Apache PLC4X
  Issue Type: Bug
  Components: API
Affects Versions: 0.7.0, 0.8.0
 Environment: Linux Raspbian Buster 10.0 / OpenJDK 13
Reporter: Adam Rossi


Here is some log output from my program. You can see that close is called
on the DefaultNettyPlcConnection, but I do not see the output from the code
modification that I would expect if the closeEventLoopForChannel method was
being called (either logger.info("Channel is closed, closing worker Group
also") or logger.warn("Trying to remove EventLoop for Channel {} but have none
stored", channel)).
 
The nioEventLoopGroup threads continue to persist after every plcConnection. 
 
I hope I have built everything correctly...I checked out your branch with:
 
{quote}git clone --single-branch --branch bugfix/close-eventloop-after-channel 
[https://github.com/apache/plc4x.git]
 {quote}And wiped out my local m2 maven repository before building the code 
with:
 
{quote}./mvnw install -DskipTests
 {quote}I also removed references to the apache snapshot repo from my project 
pom and by all appearances I am using the correct 0.8.0-SNAPSHOT jars that are 
locally built in my local m2 repo. Here is some log info from my test:
 


 
{quote}2020-08-24_10:15:43.450 DEBUG PooledPlcDriverManager - Try to borrow an 
object for url 
modbus://[192.168.0.5:503?unit-identifier=50|http://192.168.0.5:503/?unit-identifier=50]
2020-08-24_10:15:43.452 INFO  TcpChannelFactory - Configuring Bootstrap with 
Configuration{}
2020-08-24_10:15:43.458 DEBUG ModbusManager - Connection Metadata:
2020-08-24_10:15:43.459 DEBUG ModbusManager - 
org.apache.plc4x.java.spi.connection.DefaultNettyPlcConnection@185c140
2020-08-24_10:15:43.462 DEBUG Plc4xNettyWrapper - Forwarding request to plc 
ModbusTcpADU[transactionIdentifier=10,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=67,quantity=1]]
2020-08-24_10:15:43.466 DEBUG GeneratedDriverByteToMessageCodec - Sending bytes 
to PLC for message 
ModbusTcpADU[transactionIdentifier=10,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=67,quantity=1]]
 as data 000a0006320300430001
2020-08-24_10:15:43.470 DEBUG Plc4xNettyWrapper - Forwarding request to plc 
ModbusTcpADU[transactionIdentifier=11,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=69,quantity=1]]
2020-08-24_10:15:43.471 DEBUG GeneratedDriverByteToMessageCodec - Sending bytes 
to PLC for message 
ModbusTcpADU[transactionIdentifier=11,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=69,quantity=1]]
 as data 000b0006320300450001
2020-08-24_10:15:43.480 DEBUG Plc4xNettyWrapper - Forwarding request to plc 
ModbusTcpADU[transactionIdentifier=12,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=68,quantity=1]]
2020-08-24_10:15:43.481 DEBUG GeneratedDriverByteToMessageCodec - Sending bytes 
to PLC for message 
ModbusTcpADU[transactionIdentifier=12,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=68,quantity=1]]
 as data 000c0006320300440001
2020-08-24_10:15:43.484 DEBUG Plc4xNettyWrapper - Forwarding request to plc 
ModbusTcpADU[transactionIdentifier=13,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=66,quantity=1]]
2020-08-24_10:15:43.485 DEBUG GeneratedDriverByteToMessageCodec - Sending bytes 
to PLC for message 
ModbusTcpADU[transactionIdentifier=13,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=66,quantity=1]]
 as data 000d0006320300420001
2020-08-24_10:15:43.489 DEBUG PooledPlcDriverManager - close called on 
org.apache.plc4x.java.spi.connection.DefaultNettyPlcConnection@185c140
2020-08-24_10:15:43.490 INFO  ReadModbusTask - Read Modbus Task Completed.
 {quote}Some details from my code:
 
Getting the connection:
{quote}PlcConnection plcConnection =
pooledDriverManager.getConnection(modbusServerURI);
if (plcConnection.isConnected()) {
    LOG.trace("The connection is connected");
} else {
    LOG.trace("The connection is not connected. Connecting now...");
    plcConnection.connect();
}
 {quote}Reading the plc and closing the connection:
 
{quote}PlcReadRequest.Builder builder = plcConnection.readRequestBuilder();
builder.addItem("devicename", "holding-register:1[8]");
builder.addItem("generatorstate", "holding-register:67");
builder.addItem("batteryvoltage", "holding-register:513[2]");
builder.addItem("batterycurrent", "holding-register:515[2]");
builder.addItem("pvpowerwatts", "holding-register:69[2]");
builder.addItem("pvinputthishourkwh", "holding-register:307[2]");
PlcReadRequest readRequest = builder.build();
PlcReadResponse response;
            try 

Re: Leaking nioEventLoopGroup threads

2020-08-25 Thread Stefano Bossi
Hi Adam,

it's much better to open a new ticket on Jira so that your problem is not
forgotten.

here is the link: https://issues.apache.org/jira/projects/PLC4X/summary

You should open an account and copy your mail in there.

Just a suggestion to keep things in order.

Thanks,
S.


On 24/08/2020 17:26, Adam Rossi wrote:
> Julian, thank you so much for your attention on this. Your code changes
> look good but unfortunately I am still experiencing the problem.
>
> Here is some log output from my program. You can see that the close is
> called on the DefaultNettyPlcConnection, but I do not see the output from
> your code modification that I would expect if the closeEventLoopForChannel
> method was being called (Either logger.info("Channel is closed, closing
> worker Group also") or logger.warn("Trying to remove EventLoop for Channel
> {} but have none stored", channel).
>
> The nioEventLoopGroup threads continue to persist after every
> plcConnection.
>
> I hope I have built everything correctly...I checked out your branch with:
>
> git clone --single-branch --branch bugfix/close-eventloop-after-channel
> https://github.com/apache/plc4x.git
>
> And wiped out my local m2 maven repository before building the code with:
>
> ./mvnw install -DskipTests
>
> I also removed references to the apache snapshot repo from my project pom
> and by all appearances I am using the correct 0.8.0-SNAPSHOT jars that are
> locally built in my local m2 repo. Here is some log info from my test:
>
>
>
>
> 2020-08-24_10:15:43.450 DEBUG PooledPlcDriverManager - Try to borrow an
> object for url modbus://192.168.0.5:503?unit-identifier=50
> 2020-08-24_10:15:43.452 INFO  TcpChannelFactory - Configuring Bootstrap
> with Configuration{}
> 2020-08-24_10:15:43.458 DEBUG ModbusManager - Connection Metadata:
> 2020-08-24_10:15:43.459 DEBUG ModbusManager -
> org.apache.plc4x.java.spi.connection.DefaultNettyPlcConnection@185c140
> 2020-08-24_10:15:43.462 DEBUG Plc4xNettyWrapper - Forwarding request to plc
> ModbusTcpADU[transactionIdentifier=10,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=67,quantity=1]]
> 2020-08-24_10:15:43.466 DEBUG GeneratedDriverByteToMessageCodec - Sending
> bytes to PLC for message
> ModbusTcpADU[transactionIdentifier=10,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=67,quantity=1]]
> as data 000a0006320300430001
> 2020-08-24_10:15:43.470 DEBUG Plc4xNettyWrapper - Forwarding request to plc
> ModbusTcpADU[transactionIdentifier=11,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=69,quantity=1]]
> 2020-08-24_10:15:43.471 DEBUG GeneratedDriverByteToMessageCodec - Sending
> bytes to PLC for message
> ModbusTcpADU[transactionIdentifier=11,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=69,quantity=1]]
> as data 000b0006320300450001
> 2020-08-24_10:15:43.480 DEBUG Plc4xNettyWrapper - Forwarding request to plc
> ModbusTcpADU[transactionIdentifier=12,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=68,quantity=1]]
> 2020-08-24_10:15:43.481 DEBUG GeneratedDriverByteToMessageCodec - Sending
> bytes to PLC for message
> ModbusTcpADU[transactionIdentifier=12,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=68,quantity=1]]
> as data 000c0006320300440001
> 2020-08-24_10:15:43.484 DEBUG Plc4xNettyWrapper - Forwarding request to plc
> ModbusTcpADU[transactionIdentifier=13,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=66,quantity=1]]
> 2020-08-24_10:15:43.485 DEBUG GeneratedDriverByteToMessageCodec - Sending
> bytes to PLC for message
> ModbusTcpADU[transactionIdentifier=13,unitIdentifier=50,pdu=ModbusPDUReadHoldingRegistersRequest[startingAddress=66,quantity=1]]
> as data 000d0006320300420001
> 2020-08-24_10:15:43.489 DEBUG PooledPlcDriverManager - close called on
> org.apache.plc4x.java.spi.connection.DefaultNettyPlcConnection@185c140
> 2020-08-24_10:15:43.490 INFO  ReadModbusTask - Read Modbus Task Completed.
>
> Some details from my code:
>
> Getting the connection:
>
> PlcConnection plcConnection =
> pooledDriverManager.getConnection(modbusServerURI);
> if (plcConnection.isConnected()) {
> LOG.trace("The connection is connected");
>  } else {
> LOG.trace("The connection is not connected. Connecting
> now...");
> plcConnection.connect();
>  }
>
> Reading the plc and closing the connection:
>
> PlcReadRequest.Builder builder = plcConnection.readRequestBuilder();
> builder.addItem("devicename", "holding-register:1[8]");
> builder.addItem("generatorstate", "holding-register:67");
> builder.addItem("batteryvoltage", "holding-register:513[2]");
> builder.addItem("batterycurrent", "holding-register:515[2]");
> builder.addItem("pvpowerwatts", "holding-register:69[2]");
> builder.addItem("pvinputthishourkwh", "holding-register:307[2]");
> PlcReadRequest readRequest = builder.build();
> 

[GitHub] [plc4x] chrisdutz opened a new pull request #181: Feature/plc4c

2020-08-25 Thread GitBox


chrisdutz opened a new pull request #181:
URL: https://github.com/apache/plc4x/pull/181


   Working on implementing the S7 and the Modbus driver using the generated 
code.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: Re: plc4j in osgi env

2020-08-25 Thread 刘存杰

Hi Chris, i will try the Kar bundles of the driver

Many thanks,
Jeff

> -Original Message-
> From: "Christofer Dutz" 
> Sent: 2020-08-25 14:40:21 (Tuesday)
> To: "dev@plc4x.apache.org" 
> Cc: 
> Subject: Re: plc4j in osgi env
> 
> You could try out the Kar bundles every driver should produce. Not 100% sure 
> 0.7.0 already had that feature, but the 0.8.0-SNAPSHOT definitely should.
> 
> Chris
> 
> From: 刘存杰 
> Sent: Tuesday, 25 August 2020 06:53
> To: dev@plc4x.apache.org 
> Subject: plc4j in osgi env
> 
> Hello all together
> 
> I developed a small application and ran it in an OSGi env. with the plc4j modbus 
> driver 0.7.0. I tried the things listed below:
> 
> 1. I packaged all dependencies into a fat jar and got the exception "Unable to 
> find driver for protocol modbus"; the driver was not registered.
> 
> 2. I tried
> PlcConnection plcConnection = new 
> ModbusDriver().getConnection(connectionString);
> plcConnection.connect();
> and got the exception "Unimplement transport tcp".
> 
> 3. I packaged the dependencies into multiple bundles, and the 
> org.apache.org.plc4x.java.osgi package is missing.
> 
> How can I use this project in an OSGi env.?
> 
> Thanks and Regards, Jeff
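A note on item 1 above: this is an assumption, not confirmed in the thread, but plc4j discovers drivers via Java's ServiceLoader (provider files under META-INF/services), and a common cause of "Unable to find driver for protocol X" in a fat jar is that shading keeps only one module's provider file instead of merging them. If the fat jar is built with maven-shade, the ServicesResourceTransformer merges those entries; a sketch of the relevant plugin configuration:

```xml
<!-- Sketch: merge ServiceLoader provider files when shading a fat jar.
     Without this transformer, only one META-INF/services entry may
     survive, and drivers from other modules are never registered. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <transformers>
      <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
    </transformers>
  </configuration>
</plugin>
```

This only addresses the flat-classpath fat-jar case; in a real OSGi container, the Kar bundles mentioned above are the intended route.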