Assuming there is a good reason to copy all the records like this, you have a number of options, of which the best is probably something akin to the following:
If you need a jQL SELECT to identify the records, do not do it with an external EXECUTE, or with a SELECT issued before starting your program. Read up on the jBC interface to jQL, which lets you apply a filter as you traverse, so you do not first read each record to select it into a list and then traverse the file a second time.

If you do not really need jQL, just use SELECT and READNEXT/READ, and filter the items yourself as you traverse the file. This is essentially the same traversal as jQL-in-jBC, but it avoids the overhead of processing dictionary items. For instance, if your selection is based on something trivial like "attribute 3 is set to 'Y'", then don't bother with jQL.

Next, if your job will be an offline batch job with no other processes accessing the file, open the source file in read-only exclusive mode. This allows the JEDI code to skip setting any read locks on the source file and simply assume the file will not be modified while you are traversing it, which is much faster than normal reads. You cannot do this if any other process may write to the file while you are reading it. If the JEDI driver sees that the file does not have write permissions, it should open the file in this mode automatically; however, it has been quite some time since I had anything to do with that code, so you should verify that this is still true.

I doubt that using more than one 'thread' to do the read and write will gain you much in the way of performance, because the processes will compete for write locks and the write ordering will defeat any OS-level write optimisations. You may well get better throughput with, say, two processes and worse throughput with five, because of lock contention.
Although sequentially traversing your input file will bring it into memory (especially if your OS has been tuned to recognise sequential read patterns and perform read-ahead at a low level), you may find that reads improve if you force the file into cache before you start, though this depends on memory pressure from the rest of the system. Something like using the dd command to read the raw file in large blocks and send the output to /dev/null will do it.

Do not use the F.READ and F.WRITE subroutines unless you are forced to by company policy; they will be very much slower than direct jBC code. That said, I think the F.READ-type calls also do other things, such as audit trails, so you will have to refer to your corporate policies on that.

Finally, since you are copying all the records, if you can gain exclusive access for a short period, just use the native cp command to copy the raw data file. In some instances a dd will perform better than that with judicious selection of block sizes.

So, given your pseudo code above, it seems that you only need fairly simple filters, and should use jBC code to: SELECT, READNEXT, check the ID (and anything else), READ, WRITE.

Jim

On Thu, May 19, 2016 at 3:16 PM, Paweł Birgiel <[email protected]> wrote:

> Hi. I have a task to make a service which will copy records from one table
> to another as fast as possible (using at most 5 agents). I have a problem
> with choosing the best way to do that.
>
> The most obvious solution seems to be using one READ and one WRITE (they
> appear to be a bit faster than F.READ and F.WRITE) per each Y.ID in my
> main routine (after selecting the list of records in a SELECT routine).
> But it's still a bit slow and I'm not sure I'm using the service's
> capabilities at one hundred percent. Another way is to use EXECUTE 'COPY
> FROM TABLE.A TO TABLE.B' for each record separately in the main routine,
> but it doesn't seem to be faster.
>
> The most frustrating part is, when I'm using just jQL syntax in my shell
> and type something like COPY FROM TABLE.A TO TABLE.B ALL, then sometimes
> it's even faster than my service!
>
> I was also thinking about another solution:
>
> SELECT TABLE.A WITH @ID LIKE LOC...
> COPY FROM TABLE.A TO TABLE.B
>
> This way we can copy multiple records at once. Maybe I should send larger
> chunks of ids to the main routine of my service and then copy all of them?
> I made some tries and it doesn't seem to be much faster; sometimes it's
> even slower.
>
> Have you got any advice for me?
>
> --
> IMPORTANT: T24/Globus posts are no longer accepted on this forum.
>
> To post, send email to [email protected]
> To unsubscribe, send email to [email protected]
> For more options, visit this group at
> http://groups.google.com/group/jBASE?hl=en
>
> ---
> You received this message because you are subscribed to the Google Groups
> "jBASE" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
