Re: [U2] Improving performance
On 10/11/12 19:54, Jeff Butera wrote:
> In an attempt to improve performance, I've taken to replicating the
> spouse field in the DONOR record by using an update trigger on PERSON
> to ensure spouse is updated in DONOR as well. By replicating the
> spouse in DONOR, I eliminate one TRANS in each field and cut my 64
> PERSON reads to 32 reads.
>
> That said, I'm looking for other means - this is one place where Pick
> isn't my friend, as we often have reports written out of various
> tables and need to use 32+ TRANS statements to pull in PERSON
> demographic info. Any insight appreciated.

Don't quote me on this - you'll need someone at Rocket to confirm it - but as far as I know, TRANS *does* cache the records it's read. So if you make multiple references to SPOUSE, then unless you TRANS a different file or record in the interim, multiple consecutive accesses to the same record won't need to make multiple reads.

Cheers,
Wol
___
U2-Users mailing list
U2-Users@listserver.u2ug.org
http://listserver.u2ug.org/mailman/listinfo/u2-users
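If TRANS really does cache the last record read, a related dictionary-level change may cut the reads the same way the trigger does, without replicating data: resolve the spouse ID once in its own I-descriptor and let the field-level I-descriptors reference it by name. This is only a sketch - the SPOUSE_ID item is an assumption, and whether an I-desc can reference another dictionary item this way should be confirmed for your platform and release:

```
SPOUSE_ID:
    TRANS('PERSON',@ID,'spouse','X')

spouse_first:
    TRANS('PERSON',SPOUSE_ID,'first_name','X')

spouse_last:
    TRANS('PERSON',SPOUSE_ID,'last_name','X')

spouse_email:
    TRANS('PERSON',SPOUSE_ID,'email','X')

spouse_phone:
    TRANS('PERSON',SPOUSE_ID,'phone','X')
```

Each DONOR record would then cost one PERSON read to resolve the spouse ID plus one (possibly cached) read per spouse field, instead of two TRANS calls per field.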
Re: [U2] Improving performance
With Informer, I think you have a process-at-start. One idea is to use that process slot to launch a Basic program to collect all of your information, then write one record per Person to a work file. Set up your dictionary with simple AMCs off of that work file and report from it. Let the Basic front end do all the work at the start.

You'd have to cater for port-specific work files or such so that multiple users can run the report at the same time, but that's easy enough in the process-at-start as well: if the file exists, clear it; if it doesn't, create it.

HTH,
Laura

-----Original Message-----
From: u2-users-boun...@listserver.u2ug.org [mailto:u2-users-boun...@listserver.u2ug.org] On Behalf Of Jeff Butera
Sent: Saturday, November 10, 2012 5:41 PM
To: u2-users@listserver.u2ug.org
Subject: Re: [U2] Improving performance

David:

I have done many of these tricks in the past but should've mentioned that we're using Entrinsik Informer for the reporting end - so I don't have control to do things like caching results in common, etc. Thanks for your feedback.

On 11/10/2012 04:03 PM, David A. Green wrote:
> [quoted text trimmed - David's message and Jeff's original post appear in full below]
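Laura's pre-build approach might look roughly like this in UniBasic. The file name, attribute positions, and CREATE.FILE syntax below are illustrative assumptions, not taken from the thread; adjust them for your schema and platform:

```
* Sketch (assumed names): pre-join DONOR and spouse PERSON data into a
* per-port work file before the report runs, so the report itself does
* no TRANS lookups. Attribute positions are illustrative assumptions.
      WORKFILE = 'RPT.WORK.':@USERNO             ;* port-specific work file
      OPEN '', WORKFILE TO F.WORK THEN
         EXECUTE 'CLEAR.FILE ':WORKFILE          ;* reuse if it already exists
      END ELSE
         EXECUTE 'CREATE.FILE ':WORKFILE:' 1,1'  ;* syntax varies by platform
         OPEN '', WORKFILE TO F.WORK ELSE STOP 'Cannot open ':WORKFILE
      END
      OPEN '', 'DONOR' TO F.DONOR ELSE STOP 'Cannot open DONOR'
      OPEN '', 'PERSON' TO F.PERSON ELSE STOP 'Cannot open PERSON'
      EXECUTE 'SELECT DONOR'
      LOOP
         READNEXT ID ELSE EXIT
         READ DONOR.REC FROM F.DONOR, ID ELSE CONTINUE
         READ PERSON.REC FROM F.PERSON, ID ELSE PERSON.REC = ''
         SPOUSE.ID = PERSON.REC<6>               ;* spouse X-pointer (assumed amc)
         READ SPOUSE.REC FROM F.PERSON, SPOUSE.ID ELSE SPOUSE.REC = ''
         WORK.REC = DONOR.REC
         WORK.REC<10> = SPOUSE.REC<2>            ;* spouse first_name (assumed)
         WORK.REC<11> = SPOUSE.REC<3>            ;* spouse last_name (assumed)
         WORK.REC<12> = SPOUSE.REC<4>            ;* spouse email (assumed)
         WRITE WORK.REC ON F.WORK, ID
      REPEAT
```

Each PERSON and spouse record is read exactly once per DONOR, and the report dictionary then needs only simple D-type items against the work file.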
Re: [U2] Improving performance
David:

I have done many of these tricks in the past but should've mentioned that we're using Entrinsik Informer for the reporting end - so I don't have control to do things like caching results in common, etc. Thanks for your feedback.

On 11/10/2012 04:03 PM, David A. Green wrote:
> Reports have the following:
>
> 1. Input report parameters (use whatever tool you want to have as an interface with your users)
> 2. Select needed data (I use UniQuery here)
> 3. Manipulate data (I write a UniBasic program to populate a temp file with the combined data from as many tables as I need to use)
> 4. Sort data (again using UniQuery to sort my temp file)
> 5. Output data (use whatever reporting tool fits best: web, XML, printer, terminal, email, fax, etc.)
>
> The trick is to keep each of the above steps separate.
>
> You can encapsulate the "Report" steps in a Paragraph.
>
> ---
>
> Another method, if you're just using UniQuery, is to create a UniBasic subroutine with a named common area that keeps track of the LAST.PERSON.ID and the LAST.PERSON.REC. Only read the PERSON file if the LAST.PERSON.ID doesn't equal the current @ID.
>
> ---
>
> Another way: if you are getting most of your report data from PERSON, then create the I-desc to get the needed DONOR data and use PERSON as the base file.
>
> David A. Green
> (480) 813-1725
> DAG Consulting
>
> [remainder of quoted message trimmed - Jeff's original post appears in full below]

--
Jeff Butera, PhD
Associate Director for Applications and Web Services
Information Technology
Hampshire College
413-559-5556
Re: [U2] Improving performance
A join is a join in any DBMS. In SQL you'd do something like this:

SELECT DONOR.id, PERSON.first_name, PERSON.last_name,
       DONOR.given_amt, DONOR.pledge_amt, PERSON.email
FROM DONOR
INNER JOIN PERSON ON DONOR.id = PERSON.id
WHERE PERSON.spouse_last = 'Smith'
ORDER BY PERSON.last_name, PERSON.first_name

We trust that an RDBMS will cache each person related to each donor, just as in MV we trust that the PERSON item won't go out of memory while we are processing a given DONOR item. Further, if we're talking about polygamists and two Donor Persons have a single Donor spouse, we trust that the translate will keep the spouse in memory for as long as possible as well. So this isn't a situation where MV should or should not be one's friend - every platform faces the same challenge.

In your example where the ID is the same in both items, it's all but certain - barring some 0.0001% possibility (a time-slice issue where some other process consumes all memory, not really possible) - that the Person will still be in memory for the Donor. But your question becomes more valid when, for example, you're sorting pledges by date and you hit the same donor/person/spouse at many different times throughout the rendering process. If you have 80,000 pledge records and record 1 references Donor1, and then you get to the last record, which also happens to reference Donor1 after so many other records have consumed cache and forced flushing, will Donor1 and related records still be in memory? I believe the answer is that it depends on how much memory is available, how big those records are, and what each MV implementation does with data that it caches like this. Only an engineer at Rocket can answer the question for the current UniVerse and UniData platforms.

Not much of an answer, just refining the question...

HTH
T

> From: Jeff Butera
>
> At the outset, I'm a lover of Pick/MV but also have experience with
> 1NF (mostly mysql/postgresql).
> [quoted message trimmed - Jeff's original post appears in full below]
Re: [U2] Improving performance
Reports have the following:

1. Input report parameters (use whatever tool you want to have as an interface with your users)
2. Select needed data (I use UniQuery here)
3. Manipulate data (I write a UniBasic program to populate a temp file with the combined data from as many tables as I need to use)
4. Sort data (again using UniQuery to sort my temp file)
5. Output data (use whatever reporting tool fits best: web, XML, printer, terminal, email, fax, etc.)

The trick is to keep each of the above steps separate.

You can encapsulate the "Report" steps in a Paragraph.

---

Another method, if you're just using UniQuery, is to create a UniBasic subroutine with a named common area that keeps track of the LAST.PERSON.ID and the LAST.PERSON.REC. Only read the PERSON file if the LAST.PERSON.ID doesn't equal the current @ID.

---

Another way: if you are getting most of your report data from PERSON, then create the I-desc to get the needed DONOR data and use PERSON as the base file.

David A. Green
(480) 813-1725
DAG Consulting

-----Original Message-----
From: u2-users-boun...@listserver.u2ug.org [mailto:u2-users-boun...@listserver.u2ug.org] On Behalf Of Jeff Butera
Sent: Saturday, November 10, 2012 12:55 PM
To: U2 Users List
Subject: [U2] Improving performance

> [quoted message trimmed - Jeff's original post appears in full below]
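David's named-common method could be sketched like this; the subroutine name, common layout, and attribute numbering are illustrative assumptions, not code from the thread:

```
      SUBROUTINE GET.PERSON.FIELD(RESULT, PERSON.ID, AMC)
* Cache the most recently read PERSON record in named common so that
* repeated dictionary calls against the same @ID cost a single file
* read. Names and positions here are illustrative assumptions.
      COMMON /PERSONCACHE/ OPENED, F.PERSON, LAST.PERSON.ID, LAST.PERSON.REC
      IF OPENED NE 1 THEN
         OPEN '', 'PERSON' TO F.PERSON ELSE STOP 'Cannot open PERSON'
         OPENED = 1
         LAST.PERSON.ID = ''
      END
      IF PERSON.ID NE LAST.PERSON.ID THEN
         READ LAST.PERSON.REC FROM F.PERSON, PERSON.ID ELSE LAST.PERSON.REC = ''
         LAST.PERSON.ID = PERSON.ID
      END
      RESULT = LAST.PERSON.REC<AMC>
      RETURN
      END
```

A dictionary item could then invoke it with something like SUBR('GET.PERSON.FIELD', @ID, 2), assuming the subroutine is cataloged; the exact SUBR calling convention differs between UniData and UniVerse, so check it for your release.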
[U2] Improving performance
At the outset, I'm a lover of Pick/MV but also have experience with 1NF (mostly mysql/postgresql). Here's my dilemma:

For the sake of simplicity, let's say I have two tables with the following data attributes:

PERSON: id, first_name, last_name, email, phone, spouse

DONOR: id, given_amt, pledge_amt

The IDs for PERSON and DONOR are the same, and spouse is an X-pointer to another PERSON record (if populated). Often our fundraisers write reports out of DONOR and often want spouse info (name, email, phone). This is typically accomplished with I-descs in DONOR such as:

spouse_first:
TRANS('PERSON',TRANS('PERSON',@ID,'spouse','X'),'first_name','X')

spouse_last:
TRANS('PERSON',TRANS('PERSON',@ID,'spouse','X'),'last_name','X')

spouse_email:
TRANS('PERSON',TRANS('PERSON',@ID,'spouse','X'),'email','X')

spouse_phone:
TRANS('PERSON',TRANS('PERSON',@ID,'spouse','X'),'phone','X')

Thus, for each DONOR record in my report, if I want the four fields above I've got to perform 8 reads of the PERSON table (two TRANS calls per field). Sure, caching will improve this performance - but let's forget that for the moment. So for 8 DONOR records I'm doing 64 PERSON reads.

Am I correct about how TRANS is working in these cases?

In an attempt to improve performance, I've taken to replicating the spouse field in the DONOR record by using an update trigger on PERSON to ensure spouse is updated in DONOR as well. By replicating the spouse in DONOR, I eliminate one TRANS in each field and cut my 64 PERSON reads to 32 reads.

That said, I'm looking for other means - this is one place where Pick isn't my friend, as we often have reports written out of various tables and need to use 32+ TRANS statements to pull in PERSON demographic info. Any insight appreciated.

--
Jeff Butera, PhD
Associate Director for Applications and Web Services
Information Technology
Hampshire College
413-559-5556