Hi Etienne,

ok ... if there's an overflow we should definitely keep an eye on that and see how everything behaves. Right now it should just overflow and start from 0 again ... I'm not sure if we need to manually reset it to something other than 0. But I guess Julian would have reported errors with that.
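For illustration, here is a minimal sketch in Java of the kind of wrap-aware reference counter being discussed, resetting to the initial value rather than 0 on overflow. The class and method names are hypothetical and not the actual PLC4X code; the thread only establishes that tpduRef values come from an AtomicInteger starting at 10 and that the 16-bit value wraps back to 0.

import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch, not the PLC4X implementation: a tpduReference
// generator that wraps back to its initial value instead of 0 once the
// 16-bit range is exhausted.
public class TpduReferenceGenerator {

    private static final int MAX_TPDU_REF = 0xFFFF;     // 16-bit unsigned limit
    private static final int INITIAL_TPDU_REF = 10;     // initial value mentioned in the thread

    private final AtomicInteger counter = new AtomicInteger(INITIAL_TPDU_REF);

    // Returns the current reference and advances the counter, wrapping
    // back to the initial value instead of 0.
    public int nextReference() {
        return counter.getAndUpdate(current ->
                current >= MAX_TPDU_REF ? INITIAL_TPDU_REF : current + 1);
    }
}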
Are your problems related to the overflow? If yes, we should dig a little deeper.

Regarding the size of requests: the maximum I had run PLC4X with was 2600 separate data-points in one request. But that was with the old driver. PLC4X correctly split things up and merged them back together. I'm not 100% sure the new version is doing it 100% correctly, as I'm currently working on a new test suite for all drivers (including the S7).

Chris

On 28.02.20, 13:59, "Robinet, Etienne" <[email protected]> wrote:

Good afternoon,

this is the version I am using. I noticed that the application blocks when the tpduReference is back to 0 (going over the limit of Short). Do you know by any chance if this could be the reason? Was the library already designed to fetch such a number of data points in a single run? By browsing the code I saw that this tpduRef is given by an AtomicInteger with initial value 10.

Here is the last correctly handled response:

TPKTPacket[payload=COTPPacketData[parameters={},payload=S7MessageResponseData[tpduReference=65534,parameter=S7ParameterReadVarResponse[numItems=1],payload=S7PayloadReadVarResponse[items={S7VarPayloadDataItem[returnCode=OK,transportSize=BYTE_WORD_DWORD,dataLength=16,data={0,-61}]}],errorClass=0,errorCode=0],eot=true,tpduRef=0]]

And here the response that blocks:

TPKTPacket[payload=COTPPacketData[parameters={},payload=S7MessageResponseData[tpduReference=0,parameter=S7ParameterReadVarResponse[numItems=1],payload=S7PayloadReadVarResponse[items={S7VarPayloadDataItem[returnCode=OK,transportSize=BYTE_WORD_DWORD,dataLength=16,data={0,-61}]}],errorClass=0,errorCode=0],eot=true,tpduRef=0]]

On Fri, Feb 28, 2020 at 13:42, Christofer Dutz <[email protected]> wrote:

> Exactly
>
> On 28.02.20, 10:37, "Robinet, Etienne" <[email protected]> wrote:
>
> By new driver do you mean the ones from the /develop branch (0.7.0-SNAPSHOT)?
>
> Etienne
>
> On Fri, Feb 28, 2020 at 10:32, Christofer Dutz <[email protected]> wrote:
>
> > Hi Etienne,
> >
> > the old drivers were unfortunately blocking ones ... with the new drivers all requests have timeouts and will get cancelled if no responses come in within a given timeframe.
> >
> > Chris
> >
> > On 28.02.20, 10:12, "Robinet, Etienne" <[email protected]> wrote:
> >
> > Well, after 1 hour it froze again. The thing is that this time it does not seem to be a memory leak or network overload. The program just stops, and the Camel context tells me, when I shut it down, that there is 1 inflight exchange. The program froze at the same point as the IDE app: when the PollingConsumer created a PlcReadRequest and waits for the response. It is known that the receive() method of the PollingConsumer is blocking if no message is coming.
> >
> > Etienne
> >
> > On Fri, Feb 28, 2020 at 09:34, Etienne Robinet <[email protected]> wrote:
> >
> > > Hi there,
> > >
> > > I am currently testing some fix/workaround. I tried a simple test case like the previous one, but I moved the connection to the PLC outside the while loop and only connect again if the connection drops.
> > > Without temp, I don't get memory leaks (it seems), but the IDE froze after +- 60k MessageHandles.
> > > With a 100 ms sleep, however, I managed to get some decent results. I am currently testing my route again, and the TCP ports / memory do not seem to be getting overloaded or leaking!
> > >
> > > I changed the Camel integration by putting the PlcConnection in the Endpoint class with a getter that can be used by the Consumer/Producer to send requests. I don't know if this is the correct way to do it, but it seems to have kind of fixed/patched my problem for a while.
> > >
> > > Cheers,
> > >
> > > Etienne
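As a rough sketch of the pattern Etienne describes (one PlcConnection owned by the endpoint, handed out through a getter, and reopened only when it has dropped), something along these lines could work. The class and method names here are illustrative and not the actual camel-plc4x classes; the connection string is just an example, and the sketch only assumes the PlcDriverManager/PlcConnection API (getConnection, isConnected, close).

import org.apache.plc4x.java.PlcDriverManager;
import org.apache.plc4x.java.api.PlcConnection;
import org.apache.plc4x.java.api.exceptions.PlcConnectionException;

// Simplified illustration of the shared-connection pattern described above;
// class and method names are hypothetical, not the camel-plc4x Endpoint.
public class SharedConnectionHolder {

    private final String connectionString; // e.g. "s7://192.168.0.1/0/1" (example only)
    private PlcConnection connection;

    public SharedConnectionHolder(String connectionString) {
        this.connectionString = connectionString;
    }

    // Consumers/producers call this instead of opening their own connection;
    // a new connection is only created when none exists or the old one dropped.
    public synchronized PlcConnection getConnection() throws PlcConnectionException {
        if (connection == null || !connection.isConnected()) {
            connection = new PlcDriverManager().getConnection(connectionString);
        }
        return connection;
    }

    // Close the shared connection when the endpoint shuts down.
    public synchronized void stop() throws Exception {
        if (connection != null) {
            connection.close();
            connection = null;
        }
    }
}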
