Don,
   Prior to L8 TCBs, DB2 SQL requests executed on threads. Each thread was a 
separate TCB. So the QR TCB and each thread TCB could execute in parallel. 
  
Terry Draper
zSeries Performance Consultant
[email protected]
mobile:  +966 556730876

--- On Wed, 19/8/09, Don Deese <[email protected]> wrote:


From: Don Deese <[email protected]>
Subject: Re: Can CICS region share more than one processor
To: [email protected]
Date: Wednesday, 19 August, 2009, 3:48 PM


Hi Mohammad,

I don't understand the relevance of your question within the context of Dr. 
Merrill's posting.  Dr. Merrill was illustrating the concurrent execution of 
multiple TCBs (both the QR TCB and L8 TCBs) that would not have been possible 
prior to the OTE design.  It is somewhat irrelevant whether DB2 CPU time 
was charged to a QR TCB or to an application TCB in the past.  The important 
point Dr. Merrill was making is that the DB2 processing would previously have 
executed in series off the QR TCB, but now executes in parallel on L8 TCBs.  
Keep in mind that Dr. Merrill was responding to the OP's query, "Can CICS region 
share more than one processor".

FWIW, I have data from CPExpert users that show 10, 20, 30 or more L8 TCBs 
executing concurrently and using more than 100% of a single CPU (that is, the 
region is dispatched on multiple CPUs at the same time).  This would not 
have been possible prior to the OTE design.
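The "more than 100% of a CPU" observation is just arithmetic over the SMF
statistics interval: if the CPU seconds accumulated across concurrently
dispatched TCBs exceed the interval length, the region must have been running
on more than one processor at once. A minimal sketch of that arithmetic, using
made-up per-TCB numbers (not the figures quoted anywhere in this thread):

```python
# Hypothetical CPU seconds consumed by each TCB during one 60-second
# statistics interval; the TCB names here are illustrative labels only.
interval_seconds = 60.0
tcb_cpu_seconds = {
    "QR":   25.0,   # the single quasi-reentrant TCB
    "L8-1": 30.0,   # open TCBs running threadsafe / DB2 work
    "L8-2": 28.0,
    "L8-3": 12.0,
}

total_cpu = sum(tcb_cpu_seconds.values())
utilization_pct = 100.0 * total_cpu / interval_seconds

print(f"Total TCB CPU: {total_cpu:.1f}s in a {interval_seconds:.0f}s interval")
print(f"Region CPU use: {utilization_pct:.1f}% of one processor")
# 95 CPU seconds in a 60-second interval is ~158% of one processor,
# which is only possible if several TCBs were dispatched on different
# processors concurrently.
```

With the single-QR design, the region's TCB CPU could never exceed the
interval length, so this ratio was capped at 100%.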

As Dr. Merrill pointed out, there are 22 TCB types with CICS/TS 4.1, and 
several of these TCB types can have multiple TCBs executing concurrently.  The 
number of concurrent TCBs can be controlled by the MAXxxxTCBS parameters in the 
SIT or in the JVMPROFILE Resource Definition.  There are, of course, limiting 
factors inherent in the environment (for example, CICS monitors the amount of 
available MVS storage and will not attach new TCBs from the JVM TCB pool if 
storage is severely constrained).
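The attach logic described above amounts to a capped pool with an extra
environmental guard. Here is a toy sketch of that idea only; the names and
structure are invented for illustration and are not CICS dispatcher internals:

```python
class OpenTcbPool:
    """Toy model of a capped open-TCB pool: attaches are refused when the
    configured limit (analogous to a MAXxxxTCBS value) is reached, or when
    a storage check (stand-in for the MVS storage monitor) fails."""

    def __init__(self, max_tcbs, storage_ok=lambda: True):
        self.max_tcbs = max_tcbs
        self.storage_ok = storage_ok
        self.attached = 0

    def try_attach(self):
        if self.attached >= self.max_tcbs:
            return False            # pool limit reached; work must wait
        if not self.storage_ok():
            return False            # storage constrained; refuse new attach
        self.attached += 1
        return True

pool = OpenTcbPool(max_tcbs=2)
results = [pool.try_attach() for _ in range(3)]
print(results)   # the third attach is refused by the pool limit
```

In the real product the limits are per TCB mode (one pool per open-TCB type),
which is why several of the 22 TCB types can each have multiple TCBs running.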

Regards,

Don

******
Don Deese, Computer Management Sciences, Inc.
Voice: (804) 776-7109  Fax: (804) 776-7139
http://www.cpexpert.org
******



At 09:16 AM 8/13/2009, you wrote:
> Thanks Dr. Merrill for your illustrative example but I do have a question 
> about
> it. Since L8 TCBs are used to execute DB2 code as well, what part of 10,298
> seconds is for DB2 ? Since DB2 related code never executed on QR TCB
> anyway, that portion of CPU usage is moot for this discussion. The real
> question is how much of this CPU now runs on L8 TCB which used to run on QR
> TCB due to the aggressive OTE exploitation ?
> Regards
> Mohammad
> 
> 
> On Wed, 12 Aug 2009 15:18:23 -0500, Barry Merrill <[email protected]> wrote:
> 
> >In the old days, a CICS subsystem's capacity was limited by
> >the amount of CPU TCB time needed for that single QR TCB.
> >
> >Based on my analysis when OTE was brand new, of the CPU time
> >consumed by each of these new CICS TCBs, I planned this post
> >to argue that going to OTE didn't help much, because most of
> >the CICS CPU time was still being spent under the QR TCB.
> >
> >I could NOT have been more wrong!
> >
> >Analyzing new CICS/TS 4.1 Open Beta data from a VERY
> >aggressive OTE exploiter site shows (from their
> >SMF 110, subtype 2 Dispatcher Statistics segments,
> >MXG CICDS and CICINTRV datasets):
> >
> >Total TCB CPU in Dispatcher Records  = 13,080 seconds
> >Total TCB CPU in QR TCB              =  2,776 seconds
> >Total TCB CPU in L8 TCB              = 10,298 seconds
> >Total TCB CPU in all other TCBs      =      6 seconds
> >
> >Aha, you say, OTE still doesn't help; the CPU time just moved
> >from the QR TCB to the L8 TCB, so the capacity limit just moved
> >from one TCB to the other, right?
> >
> >Wrong again.
> >
> >While the QR TCB can attach only a single TCB, these new TCBs
> >can attach multiple TCBs; in fact, the SMF data shows that
> >the L8 TCB attached a maximum of 22 TCBs, each of which
> >is a separate dispatchable unit.
> >
> >So, it REALLY does look like that these multiple OTE TCBs
> >do eliminate the old "one-TCB" CICS capacity limitations,
> >and does indeed spread your CICS time across MANY TCBs.
> >
> >(Total SRB time in the Dispatcher Records was only 65 seconds.)
> >
> >Barry Merrill
> >
> >Herbert W. Barry Merrill, PhD
> >President-Programmer
> >Merrill Consultants
> >MXG Software
> >10717 Cromwell Drive
> >Dallas, TX 75229
> >
> > [email protected]
> > http://www.mxg.com
> >               admin questions:       [email protected]
> >               technical questions:  [email protected]
> > tel: 214 351 1966
> > fax: 214 350 3694
> >

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
