Sure I could, in my copious spare time... and of course the reporting would
be user-optional.

Yes, "enough" reduction in elapsed time with a "reasonable" CPU increase is
the ultimate goal of all this shilly-shallying around.  All I have been
asking is whether there are any actual measurements that could be used to
make the process a little less vague.

Peter

-----Original Message-----
From: Tom Schmidt [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 09, 2007 3:55 PM
To: [email protected]
Subject: Re: How to measure actual usage of BLSR buffers?

<Snipped>
You could still write your own BLSR subsystem.  Of course, your version of
BLSR will have higher CPU cost, since you intend to do all of that extra
work, and a lot of it will be inline (during GET/PUT/CLOSE).  Maybe your
BLSR could make the reporting a user option, so the user can decide?
 
The statistics that I have always been most interested in having from BLSR
vs. NSR are the step CPU time and the elapsed time.  If the elapsed time
improved 'enough' and the CPU cost was 'reasonable', then I could move on
to other 'opportunities'.
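The 'enough'/'reasonable' test above can be sketched as a small comparison of
the two runs' measurements.  This is only an illustration, not anything from
BLSR itself: the function name and the threshold values are hypothetical, and
the inputs are assumed to come from whatever step-level accounting you already
collect (e.g. SMF type 30 step records).

```python
# Hypothetical sketch of the trade-off test: given step CPU and elapsed
# times measured for an NSR run and a BLSR run of the same job step,
# decide whether the elapsed-time win is "enough" and the CPU growth
# "reasonable".  The 20% / 10% thresholds are made-up examples.

def blsr_worthwhile(nsr_elapsed, nsr_cpu, blsr_elapsed, blsr_cpu,
                    min_elapsed_gain=0.20, max_cpu_growth=0.10):
    """True if elapsed time dropped by at least min_elapsed_gain
    (as a fraction) while CPU grew by at most max_cpu_growth."""
    elapsed_gain = (nsr_elapsed - blsr_elapsed) / nsr_elapsed
    cpu_growth = (blsr_cpu - nsr_cpu) / nsr_cpu
    return elapsed_gain >= min_elapsed_gain and cpu_growth <= max_cpu_growth

# Example: elapsed fell 600s -> 420s (30% gain), CPU rose 50s -> 53s (6%)
print(blsr_worthwhile(600, 50, 420, 53))  # True
```

Picking the actual threshold values is exactly the judgment call being
discussed in this thread; real measurements would make them less arbitrary.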

This message and any attachments are intended only for the use of the
addressee and may contain information that is privileged and confidential.
If the reader of the message is not the intended recipient or an authorized
representative of the intended recipient, you are hereby notified that any
dissemination of this communication is strictly prohibited. If you have
received this communication in error, please notify us immediately by e-mail
and delete the message and any attachments from your system.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
