On Tue, Apr 25, 2017 at 10:12 AM, Tim Nevels via 4D_Tech <[email protected]> wrote:

> Here's an idea. I'm assuming all the record processing is done in a single
> process. How much work would it be to modify the code so that it spawns
> multiple processes that can run at the same time? I don't know the code,
> but maybe you could pass that big BLOB off to a method in another process
> and let it do the work. Have 3-4 of these processes all working at the same
> time. I wonder if that would give you a performance boost.

Tim:

It's looking like the data format is causing the performance hit, so I'd love to split this off across processes, CPUs, or workstations, for that matter. The problem I'd hit trying that now is that the BLOBs contain data from multiple tables, and they also contain multiple updates for changes to a given table, so I really have to unpack the BLOB "to find out what's in it". That's a fair amount of code to write but, to your point, there could be a big payoff if I could split it across processes.

In contrast, my thinking is that there's a better payoff in simplifying the encode/decode process. In addition to an anticipated performance boost, I won't need to run special code just to view the data. Right now, I've got to run another set of routines to display the data in a human-readable form.

All in all, it's a very versatile approach to packaging data, but it *is* a pain in the ass to work with.

--
Douglas von Roeder
949-336-2902

**********************************************************************
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:[email protected]
**********************************************************************
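[Editor's note: the fan-out Tim describes — unpack each BLOB in its own worker process — could be sketched as follows. This is Python rather than 4D, purely as an illustration; `decode_blob` and its one-byte length-prefixed layout are hypothetical stand-ins for the real multi-table BLOB format, which the thread does not specify.]

```python
# Sketch of the fan-out idea (not the actual 4D code): hand each packed
# BLOB to a pool of worker processes and let them unpack in parallel.
from multiprocessing import Pool

def decode_blob(blob: bytes) -> list:
    """Hypothetical unpacker: returns (table_name, payload) records.

    Stands in for the real routine that must walk the BLOB to find out
    which tables and which updates it contains.
    """
    records = []
    i = 0
    while i < len(blob):
        name_len = blob[i]                       # 1-byte table-name length
        name = blob[i+1:i+1+name_len].decode()   # table name
        i += 1 + name_len
        data_len = blob[i]                       # 1-byte payload length
        payload = blob[i+1:i+1+data_len]         # one update for that table
        i += 1 + data_len
        records.append((name, payload))
    return records

def decode_all(blobs: list, workers: int = 4) -> list:
    """Fan the BLOBs out across 3-4 worker processes, per Tim's suggestion."""
    with Pool(processes=workers) as pool:
        return pool.map(decode_blob, blobs)

if __name__ == "__main__":
    # Two toy BLOBs, each packing updates for two tables.
    blob = (bytes([3]) + b"Inv" + bytes([2]) + b"ab"
            + bytes([5]) + b"Lines" + bytes([3]) + b"xyz")
    for recs in decode_all([blob, blob]):
        print(recs)
```

The catch Douglas raises still applies: the pool only helps once the dispatcher knows how to split the work, and with multiple tables and multiple updates per BLOB, that split itself requires unpacking.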

