Hi Julian,

well, a little explanation on the "costs" of requests for the S7 protocol (though some of this should apply to all protocols):

- First of all: every address I ask for produces a cost in the number of bytes in the request (each address has to be transferred).
- Secondly: every address I ask for usually produces a payload in the response.
- The PDU size is the maximum size of any packet exchanged with the PLC (request and response). If a message exceeds this size, it is ignored; if a request would produce a response that is too big, it is also ignored.
- Depending on the type of S7 device, the number of parallel unacknowledged requests is limited (MAX AMQ parameter). For an S7 1200, for example, this is somewhere around 3, so I can only have 3 messages in flight that the PLC has not yet responded to; any further requests the PLC will simply deny. They therefore have to be queued, and the next message can only be sent after one of the previous messages has been answered (see the sketch below). This adds quite a bit of latency to processing a large request, as it is limited not only by network latency but also by the PLC itself: the PLC doesn't do IO continuously, but has well-defined time slots in its processing cycle to handle things like this.
- Processing a lot of little addresses costs the PLC more processing power than reading one block. CPU power is very limited, and by default the amount of time available for IO is limited to 20% of the cycle time, so aggregating multiple items might also help reduce the IO time needed.
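To illustrate the MAX AMQ point, here is a minimal sketch of such a send queue in Java. All class and method names are invented for illustration (this is not the actual PLC4X driver code), and the wire handling is elided:

```
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;

/**
 * Hypothetical sketch of the queuing behavior described above: at most
 * 'maxAmq' requests may be outstanding at once; everything else waits
 * in a FIFO queue until a response frees a slot.
 */
public class AmqLimitedSender {

    private final int maxAmq;            // e.g. ~3 for an S7 1200
    private int inFlight = 0;
    private final Queue<Runnable> pending = new ArrayDeque<>();

    public AmqLimitedSender(int maxAmq) {
        this.maxAmq = maxAmq;
    }

    /** Send a request, or queue it if the PLC's window is already full. */
    public synchronized CompletableFuture<byte[]> send(byte[] requestPdu) {
        CompletableFuture<byte[]> result = new CompletableFuture<>();
        Runnable task = () -> transmit(requestPdu, result);
        if (inFlight < maxAmq) {
            inFlight++;
            task.run();
        } else {
            pending.add(task);           // would otherwise be denied by the PLC
        }
        return result;
    }

    /** Called when a response (or timeout) arrives: frees a slot. */
    private synchronized void onComplete() {
        inFlight--;
        Runnable next = pending.poll();
        if (next != null) {
            inFlight++;
            next.run();
        }
    }

    private void transmit(byte[] requestPdu, CompletableFuture<byte[]> result) {
        // ... put the PDU on the wire; when the matching response arrives,
        // call result.complete(responsePdu) and then onComplete().
    }
}
```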
In one of my last POCs we had 2600 individual addresses being read from an S7 device. This was automatically split up into 35 separate requests by the driver, and most of them were queued. I am sure we could have reduced the total processing time quite a bit with optimizations like these (a packing sketch follows after Julian's mail below):

- If I request two addresses, it might produce fewer bytes on the wire to ask for one array of items instead (even if there is data in between that I'm going to ignore).
- If I have multiple request items, it might be better to rearrange their order (some request items are larger on the request side than on the response side; by rearranging them, the space could be better utilized to maximize the filling of the request and response PDUs).
- Some items would never fit into one PDU (e.g. reading 500 bytes in one item on an S7 1200). Then the item has to be split up to fill multiple PDUs and joined back together.
- ...

Chris

On 20.12.18, 08:37, "Julian Feinauer" <j.feina...@pragmaticminds.de> wrote:

Hi Greg,

I like your approach, and in fact we frequently have the same requirements. That's why I wrote the scraper module (in the utils submodule) some weeks ago. You give it a config file (currently JSON or YAML) and it starts some background threads to fetch all these values at the configured rates. Below you see an example file which is, except for some naming, pretty much equal to your example.

Currently this thing is dumb... but I like the idea of optimizing read requests. I see two levels there: first, the scraper could "merge" jobs or something, and second, the driver itself (or a layer above it) could do things like caching of values (I implemented that for object-relational mapping in the OPM module, also in utils).

As I like query planning, I would also love to work on such features. But what would be necessary is a good knowledge of the cost of a request. For an S7, for example, I have no idea what's expensive: the number of requests, the size of a request, random read accesses in memory... Otherwise there is the danger of optimizing something for the worse.

But let's keep up this discussion, as for our scraping scenarios I definitely see the benefit of this, and I also like the topic :)

Julian

PS: Would you file a Jira for this? And perhaps we could also create a Confluence page for this topic?

```
sources:
  S7_TIM: s7://10.10.64.22/0/1
  S7_CHRIS: s7://10.10.64.20/0/1
jobs:
  - name: pressure-job
    scrapeRate: 10
    sources:
      - S7_TIM
    fields:
      pressure: '%DB225:DBW0:INT'
  - name: another-job
    scrapeRate: 100
    sources:
      - S7_TIM
    fields:
      temperature: '%DB225:DBW0:INT'
```
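To make the packing ideas from Chris's list above concrete, here is a minimal, hypothetical sketch of first-fit-decreasing packing of request items into PDU-sized buckets, with oversized items split first. The Item record and the size accounting are invented for illustration; a real implementation would also have to model the per-item and per-PDU header overhead, and the fact that request- and response-side sizes differ:

```
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PduPacker {

    /** One requested address range, e.g. 40 bytes starting at some offset. */
    public record Item(String address, int offset, int sizeInBytes) {}

    public static List<List<Item>> pack(List<Item> items, int maxPduPayload) {
        // Split items that could never fit into a single PDU on their own
        // (e.g. a 500-byte read on an S7 1200 with ~240-byte PDUs).
        List<Item> splitItems = new ArrayList<>();
        for (Item item : items) {
            int remaining = item.sizeInBytes();
            int offset = item.offset();
            while (remaining > maxPduPayload) {
                splitItems.add(new Item(item.address(), offset, maxPduPayload));
                offset += maxPduPayload;
                remaining -= maxPduPayload;
            }
            splitItems.add(new Item(item.address(), offset, remaining));
        }
        // First-fit decreasing: placing the biggest items first tends to
        // fill the PDUs better than taking them in arrival order.
        splitItems.sort(Comparator.comparingInt(Item::sizeInBytes).reversed());
        List<List<Item>> pdus = new ArrayList<>();
        List<Integer> used = new ArrayList<>();
        for (Item item : splitItems) {
            boolean placed = false;
            for (int i = 0; i < pdus.size() && !placed; i++) {
                if (used.get(i) + item.sizeInBytes() <= maxPduPayload) {
                    pdus.get(i).add(item);
                    used.set(i, used.get(i) + item.sizeInBytes());
                    placed = true;
                }
            }
            if (!placed) {
                List<Item> pdu = new ArrayList<>();
                pdu.add(item);
                pdus.add(pdu);
                used.add(item.sizeInBytes());
            }
        }
        return pdus;
    }
}
```

Each inner list would then become one request (subject to the MAX AMQ queue sketched earlier), and the partial responses would be joined back together by address and offset.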
On 19.12.18, 05:04, "Greg Trasuk" <tras...@trasuk.com> wrote:

Hello all,

Years ago I went through implementing an interface to a couple of Allen-Bradley products, and I faced the same question of how to optimize the reading and writing operations. What I settled on, instead of trying to micro-optimize individual operations, was to implement a "scan list", which could then be optimized. The idea is to define a set of PLC addresses that need to be read or written on a regular basis. I had an XML definition that looked a lot like this:

```
<plc-link ip-address="192.168.1.5" port="2222" plc-type="slc5" protocol="CSP" optimize="yes">
  <scan-set period="45" optimize="false" description="Questionable Addresses - isolated for non-optimized debug">
    <read-from-plc>
      <tag name="Prod.Temp.East" address="F8:7"/>
      <tag name="Prod.Temp.West" address="F8:6"/>
    </read-from-plc>
  </scan-set>
  <scan-set period="2.1" description="Shift Enable">
    <write-to-plc>
      <tag name="Enable.Shift" address="B3:0/5"/>
      <tag name="Enable.ProductionShift" address="B3:59/5"/>
    </write-to-plc>
  </scan-set>
  <scan-set period="45" description="SMT Area Operating Data">
    <read-from-plc>
      <tag name="L1.DowntimeMinutes.Shift" address="N100:0"/>
      <tag name="L1.Uptime" address="N102:0"/>
      <tag name="L1.UtilizationDowntime" address="N102:1"/>
      <tag name="L1.TotalUtilizationTime" address="N102:2"/>
      <tag name="L1.NonUtilizationDowntime" address="N102:3"/>
      <tag name="L1.NumberOfBreakdowns" address="N102:4"/>
      <tag name="L1.NumberOfBoards" address="N102:5"/>
      <tag name="L1.Utilization" address="F108:1"/>
      <tag name="L1.PercentDowntime" address="F108:3"/>
      <tag name="L1.MaintenanceResponseAverage" address="F108:4"/>
      <tag name="L1.MaintenanceResponseMinutes.Shift" address="N100:1"/>
      <tag name="L1.MaterialShort.Shift" address="N113:0"/>
      <tag name="L1.Setup.Shift" address="N113:2"/>
      <tag name="L1.LunchBreak.Shift" address="N113:3"/>
      <tag name="L1.Meetings.Shift" address="N113:4"/>
      <tag name="L1.Idle.Shift" address="N113:5"/>
      <tag name="L1.PM.Shift" address="N113:7"/>
      <tag name="L1.Breakdown.Shift" address="N113:8"/>
      <tag name="L1.Other.Shift" address="N113:10"/>
    </read-from-plc>
  </scan-set>
</plc-link>
```

The client of this scanner gets a reference to a "SharedDataStore" in which it can look up tag values by name, and write data to the tagged value. The data is synchronized to the PLC according to the 'period' that's set on the scan-set.

This worked out well, because it represents the intentions of the programmer directly (e.g. read this block of tags every 45 seconds), and then the system can optimize the read/write methods to fulfill that intention. For instance, you can look at the block and coalesce any sequential addresses into a block read rather than single-address reads, if the PLC supports that (sketched after this mail). On the AB SLC, that turned out to be quite a useful optimization. It's also possible to flag situations where the scan-set targets are not fulfilled (e.g. it took 60 seconds to read the block when the scan-set called for 45 seconds).

Just something to think about.

Cheers,

Greg Trasuk
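A minimal sketch of the coalescing step Greg describes, assuming a hypothetical SLC-style address model (file + element); all names are invented for illustration:

```
import java.util.ArrayList;
import java.util.List;

public class ScanSetOptimizer {

    /** One tag in a scan-set: data file and element within it (SLC-style). */
    public record Tag(String name, String file, int element) {}

    /** A coalesced read: 'count' consecutive elements starting at 'start'. */
    public record BlockRead(String file, int start, int count, List<Tag> tags) {}

    /** Merges sequential addresses into block reads. Assumes 'tags' is pre-sorted by file and element. */
    public static List<BlockRead> coalesce(List<Tag> tags) {
        List<BlockRead> blocks = new ArrayList<>();
        for (Tag tag : tags) {
            BlockRead last = blocks.isEmpty() ? null : blocks.get(blocks.size() - 1);
            if (last != null
                    && last.file().equals(tag.file())
                    && tag.element() == last.start() + last.count()) {
                // Directly adjacent to the previous block: extend it by one element.
                List<Tag> merged = new ArrayList<>(last.tags());
                merged.add(tag);
                blocks.set(blocks.size() - 1,
                        new BlockRead(last.file(), last.start(), last.count() + 1, merged));
            } else {
                blocks.add(new BlockRead(tag.file(), tag.element(), 1, List.of(tag)));
            }
        }
        // Each block becomes one read; values are fanned back out to tags by name.
        return blocks;
    }
}
```

With the scan-set above, the six consecutive N102 tags (N102:0 .. N102:5) would collapse into a single block read covering elements 0-5.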
> On Dec 18, 2018, at 12:25 PM, Christofer Dutz <christofer.d...@c-ware.de> wrote:
>
> Hi all,
>
> right now the S7 driver is the only driver that makes use of dynamically rewriting a message.
>
> This is, for example, needed in order to allow large read requests, with the driver splitting them up into multiple messages.
> Ideally it would not only split up the messages, but also optimize the requests by rearranging the order of the requested elements or completely rewriting them (e.g. replace reading of 10 separate bit addresses by simply reading 3 bytes).
>
> As I'm currently defining some quite generic methods in the protocol-model, I think it would be a better idea to solve this problem in a more generic way, as I believe most optimizations would apply to most protocols.
> I would also like to have this configurable: here I would imagine that we provide what we think is a good implementation for the individual driver, but allow overriding processors by adding alternate implementations to the class-path and allowing the processor to be overridden in the connection string.
>
> I was thinking of using the same mechanism (Java Service Lookup) as we are using for the drivers themselves (a sketch follows after this quoted mail).
>
> What do you think?
>
> Chris
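For the Java Service Lookup mechanism Chris mentions, here is a minimal sketch of how such a pluggable processor lookup could work. The RequestOptimizer interface and its methods are invented for illustration; only java.util.ServiceLoader itself is standard Java:

```
import java.util.ServiceLoader;

/**
 * Hypothetical optimizer SPI. Implementations would be listed in
 * META-INF/services/org.example.RequestOptimizer on the class-path,
 * which is how java.util.ServiceLoader discovers them.
 */
public interface RequestOptimizer {

    /** Name used to select this optimizer, e.g. via a connection-string parameter. */
    String name();

    /** Rewrite/split/merge a request before it goes on the wire (types elided). */
    Object optimize(Object request);

    /** Picks the optimizer requested in the connection string, e.g. "...?optimizer=default". */
    static RequestOptimizer forName(String requestedName) {
        for (RequestOptimizer candidate : ServiceLoader.load(RequestOptimizer.class)) {
            if (candidate.name().equals(requestedName)) {
                return candidate;
            }
        }
        throw new IllegalArgumentException("No optimizer named " + requestedName);
    }
}
```

A driver would ship its default implementation registered this way, and a user could override it simply by putting another implementation on the class-path and naming it in the connection string, which matches what Chris proposes above.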