Hi Greg,

I like your approach, and in fact we frequently have the same requirements. That is why I wrote the scraper module (in the utils submodule) some weeks ago. You give it a config file (currently JSON or YAML) and it starts background threads that fetch all the configured values periodically.
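To make the mechanics concrete, here is a minimal, self-contained sketch of what scraping along these lines plausibly looks like internally: one scheduled task per job, firing at the job's scrapeRate. All class and method names are invented for illustration and do not reflect the actual scraper module's API; scrapeRate is assumed to be in milliseconds here, which may not match the real unit.

```
import java.util.List;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only; names do not match the real scraper module's API.
public class ScraperSketch {

    // One job from the config: a name, a rate, a source alias and field addresses.
    record ScrapeJob(String name, long scrapeRateMs, String source, Map<String, String> fields) {}

    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);

    public void start(List<ScrapeJob> jobs) {
        // One periodic task per configured job.
        for (ScrapeJob job : jobs) {
            scheduler.scheduleAtFixedRate(
                () -> fetchFields(job), 0, job.scrapeRateMs(), TimeUnit.MILLISECONDS);
        }
    }

    // Stand-in for the driver call that actually reads the addresses from the PLC.
    private void fetchFields(ScrapeJob job) {
        job.fields().forEach((field, address) ->
            System.out.printf("[%s] would read %s (%s) from %s%n",
                job.name(), field, address, job.source()));
    }
}
```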
Below you see one example file which, except for some naming, is pretty much identical to your example:

```
sources:
  S7_TIM: s7://10.10.64.22/0/1
  S7_CHRIS: s7://10.10.64.20/0/1

jobs:
  - name: pressure-job
    scrapeRate: 10
    sources:
      - S7_TIM
    fields:
      pressure: '%DB225:DBW0:INT'
  - name: another-job
    scrapeRate: 100
    sources:
      - S7_TIM
    fields:
      temperature: '%DB225:DBW0:INT'
```

Currently this thing is dumb... but I like the idea of optimizing read requests. I see two levels there: first, the scraper could "merge" jobs or something, and second, the driver itself (or a layer above) could do things like caching of values (I implemented that for object-relational mapping in the OPM module, also in utils). As I like query planning, I would also love to work on such features. But what would be necessary is good knowledge of the cost of a request. For an S7, for example, I have no idea what's expensive: the number of requests, the size of a request, random read accesses in memory... Otherwise there is the danger of optimizing something for the worse. But let's keep this discussion going; as for our scraping scenarios, I definitely see the benefit of this and also like the topic : )

Julian

PS: Would you file a Jira for this? And perhaps we could also create a Confluence page for this topic?

On 19.12.18, 05:04, "Greg Trasuk" <tras...@trasuk.com> wrote:

Hello all,

Years ago I went through implementing an interface to a couple of Allen-Bradley products, and I faced the same question of how to optimize the reading and writing operations. What I settled on, instead of trying to micro-optimize individual operations, was to implement a "scan list", which could then be optimized. The idea is to define a set of PLC addresses that need to be read or written on a regular basis. I had an XML definition that looked a lot like this:

```
<plc-link ip-address="192.168.1.5" port="2222" plc-type="slc5" protocol="CSP" optimize="yes">
  <scan-set period="45" optimize="false" description="Questionable Addresses-isolated for non-optimized debug">
    <read-from-plc>
      <tag name="Prod.Temp.East" address="F8:7"/>
      <tag name="Prod.Temp.West" address="F8:6"/>
    </read-from-plc>
  </scan-set>
  <scan-set period="2.1" description="Shift Enable">
    <write-to-plc>
      <tag name="Enable.Shift" address="B3:0/5"/>
      <tag name="Enable.ProductionShift" address="B3:59/5"/>
    </write-to-plc>
  </scan-set>
  <scan-set period="45" description="SMT Area Operating Data">
    <read-from-plc>
      <tag name="L1.DowntimeMinutes.Shift" address="N100:0"/>
      <tag name="L1.Uptime" address="N102:0"/>
      <tag name="L1.UtilizationDowntime" address="N102:1"/>
      <tag name="L1.TotalUtilizationTime" address="N102:2"/>
      <tag name="L1.NonUtilizationDowntime" address="N102:3"/>
      <tag name="L1.NumberOfBreakdowns" address="N102:4"/>
      <tag name="L1.NumberOfBoards" address="N102:5"/>
      <tag name="L1.Utilization" address="F108:1"/>
      <tag name="L1.PercentDowntime" address="F108:3"/>
      <tag name="L1.MaintenanceResponseAverage" address="F108:4"/>
      <tag name="L1.MaintenanceResponseMinutes.Shift" address="N100:1"/>
      <tag name="L1.MaterialShort.Shift" address="N113:0"/>
      <tag name="L1.Setup.Shift" address="N113:2"/>
      <tag name="L1.LunchBreak.Shift" address="N113:3"/>
      <tag name="L1.Meetings.Shift" address="N113:4"/>
      <tag name="L1.Idle.Shift" address="N113:5"/>
      <tag name="L1.PM.Shift" address="N113:7"/>
      <tag name="L1.Breakdown.Shift" address="N113:8"/>
      <tag name="L1.Other.Shift" address="N113:10"/>
    </read-from-plc>
  </scan-set>
</plc-link>
```

The client of this scanner gets a reference to a "SharedDataStore" in which they can look up tag values by name, and write data to the tagged value. The data is synchronized to the PLC according to the 'period' that's set on the scan-set. This worked out well, because it represents the intentions of the programmer directly (e.g. read this block of tags every 45 seconds), and then the system can optimize the read/write methods to fulfill that intention. For instance, you can look at the block and coalesce any sequential addresses into a block read rather than a single-address read, if the PLC supports that. On the AB SLC, that turned out to be a quite useful optimization.
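The coalescing optimization Greg describes is easy to sketch. A hedged illustration, assuming each address can be reduced to a data file plus an element offset (as in the AB-style N102:0 addresses above): sort the requested elements, then merge runs of consecutive offsets within the same file into a single block read. The types are invented for the example and are not any driver's API.

```
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative only: coalesce requests for consecutive elements of the same
// data file (e.g. N102:0..N102:5) into one block read.
public class ReadCoalescer {

    record Element(String file, int offset) {}                   // e.g. ("N102", 4)
    record BlockRead(String file, int startOffset, int count) {} // one request on the wire

    public static List<BlockRead> coalesce(List<Element> elements) {
        List<Element> sorted = new ArrayList<>(elements);
        sorted.sort(Comparator.comparing(Element::file).thenComparingInt(Element::offset));

        List<BlockRead> blocks = new ArrayList<>();
        BlockRead current = null;
        for (Element e : sorted) {
            if (current != null && current.file().equals(e.file())) {
                int next = current.startOffset() + current.count();
                if (e.offset() < next) {
                    continue; // duplicate address, already covered by the current block
                }
                if (e.offset() == next) {
                    // Directly adjacent: extend the running block instead of adding a read.
                    current = new BlockRead(current.file(), current.startOffset(), current.count() + 1);
                    blocks.set(blocks.size() - 1, current);
                    continue;
                }
            }
            current = new BlockRead(e.file(), e.offset(), 1);
            blocks.add(current);
        }
        return blocks;
    }
}
```

Applied to the N102:0 through N102:5 tags in the scan-set above, this would turn six single-element reads into one six-element block read.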
It's also possible to flag situations where the scan-set targets are not fulfilled (e.g. it took 60 seconds to read the block when the scan-set called for 45 seconds).

Just something to think about.

Cheers,

Greg Trasuk

> On Dec 18, 2018, at 12:25 PM, Christofer Dutz <christofer.d...@c-ware.de> wrote:
>
> Hi all,
>
> right now the S7 driver is the only driver that makes use of dynamically rewriting a message.
>
> This is needed, for example, to allow large read requests, with the driver splitting them up into multiple messages.
> Ideally it would not only split up the messages, but also optimize the requests by rearranging the order of requested elements or completely rewriting the requests (replacing the reading of 10 separate bit addresses with, for example, simply reading 3 bytes).
>
> As I'm currently defining some quite generic methods in the protocol-model, I think it would be a better idea to solve this problem in a more generic way, as I believe most optimizations would apply to most protocols.
> I would also like to have this configurable: I would imagine that we provide what we think is a good implementation for the individual driver, but allow overriding processors by adding alternate implementations to the class path and allowing the processor to be overridden in the connection string.
>
> I was thinking of using the same mechanism (Java Service Lookup) as we are using for the drivers themselves.
>
> What do you think?
>
> Chris
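Chris's "Java Service Lookup" suggestion maps directly onto java.util.ServiceLoader, the mechanism PLC4X already uses to discover drivers. A minimal sketch, assuming a hypothetical RequestOptimizer SPI; the interface, the ProtocolRequest placeholder, and the optimizer option in the connection string are all invented for illustration:

```
import java.util.List;
import java.util.ServiceLoader;

// Both types below are invented for this sketch; they are not PLC4X API.
interface ProtocolRequest {} // stand-in for a driver's protocol-level request

interface RequestOptimizer {
    // Matched against a connection-string option, e.g. "s7://10.10.64.22?optimizer=default".
    String name();

    // Rewrite one logical request into the message(s) actually put on the wire:
    // split oversized reads, reorder items, or merge bit reads into byte reads.
    List<ProtocolRequest> optimize(ProtocolRequest request);
}

final class RequestOptimizerLookup {
    // Implementations are discovered from the class path via
    // META-INF/services/RequestOptimizer entries, just like the drivers themselves.
    static RequestOptimizer forName(String name) {
        for (RequestOptimizer candidate : ServiceLoader.load(RequestOptimizer.class)) {
            if (candidate.name().equals(name)) {
                return candidate; // a user-supplied optimizer can override the default
            }
        }
        throw new IllegalArgumentException("No RequestOptimizer registered as '" + name + "'");
    }
}
```

A driver would call RequestOptimizerLookup.forName(...) with the value parsed from the connection string, falling back to its bundled default when the option is absent.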