>>If so, the job would not be able to run more WUs in parallel at any point in 
>>time than there are LCPs in the node (highs, mediums, and unparked lows), 
>>right?

>I am confused by this question. Of course, regardless of the other conditions 
>that you've asked about, a job can never run more tasks in parallel "at the 
>same point in time" than there are Logical CPs available, which can't be more 
>than the number of physical CPs available at that instant.




Sorry for not being clear enough.


I understand that with HiperDispatch on, WLM creates "CP nodes" based on the 
topology. The LPAR in question has 12 LCPs, 4 highs, 1 medium and 7 lows. This 
would lead to 2 nodes with 2 highs each, the medium in one of the nodes and the 
lows spread (3 & 4?).
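To make my reading concrete, here is a toy Python sketch of the split I described. This is only my illustration of one plausible round-robin layout, not the actual WLM topology algorithm:

```python
# Toy model of the node split described above -- NOT the real WLM
# algorithm, just one plausible 2-node layout for an LPAR with
# 4 high, 1 medium and 7 low polarity LCPs.

def split_into_nodes(highs, mediums, lows, n_nodes=2):
    """Distribute LCPs round-robin across n_nodes affinity nodes."""
    nodes = [{"high": 0, "medium": 0, "low": 0} for _ in range(n_nodes)]
    for i in range(highs):
        nodes[i % n_nodes]["high"] += 1
    for i in range(mediums):
        nodes[i % n_nodes]["medium"] += 1
    for i in range(lows):
        nodes[i % n_nodes]["low"] += 1
    return nodes

nodes = split_into_nodes(highs=4, mediums=1, lows=7)
# -> node 0: 2 high, 1 medium, 4 low; node 1: 2 high, 0 medium, 3 low
```

Which would give the "3 & 4" spread of the lows I guessed at above.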


WLM then creates a work unit queue (WUQ, or dispatcher queue) for each node. 
Work units (WUs) are then assigned to a node, so only CPs from that node will 
select WUs from that node's queue. I'm assuming that the assignment to a node 
is an address space attribute and not a WU attribute (anything else would 
negate the positive effect on the processor caches for which HiperDispatch was 
invented).
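As a toy sketch of what I mean by per-node queues (hypothetical names only, not any actual WLM or dispatcher interface):

```python
# Minimal model of per-node work unit queues: each CP pulls only from
# the queue of its own affinity node, so work queued to one node cannot
# be dispatched by CPs of the other node. Purely illustrative.
from collections import deque

wuq = {0: deque(), 1: deque()}       # one dispatcher queue per node
as_node = {"JOBA": 0, "JOBB": 1}     # node as an address space attribute

def enqueue(jobname, wu):
    # all WUs of an address space go to that address space's node
    wuq[as_node[jobname]].append(wu)

def dispatch(cp_node):
    # a CP selects work only from its own node's queue
    q = wuq[cp_node]
    return q.popleft() if q else None

enqueue("JOBA", "TCB1")
enqueue("JOBA", "TCB2")
print(dispatch(1))  # None: node 1 CPs see no JOBA work
print(dispatch(0))  # 'TCB1'
```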


If all tasks of an address space are in the same node, there can never be more 
WUs running in parallel than there are (L)CPs in the node. In my example, that 
is 2 highs plus the medium (for one node), plus up to 4 low CPs, if unparked. 
That would explain why RMF often reports CP delay even though there is plenty 
of spare capacity.
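Put as arithmetic (again just a sketch under my assumptions above: highs and mediums always usable, lows only when unparked):

```python
# Illustrates the parallelism ceiling argued above: if every task of a
# job is bound to one affinity node, concurrent WUs are capped by that
# node's usable LCPs, regardless of spare capacity in the other node.

def max_parallel_wus(node, unparked_lows):
    # Assumption: high and medium LCPs are always usable; only unparked
    # low-polarity LCPs add capacity.
    return node["high"] + node["medium"] + min(unparked_lows, node["low"])

node0 = {"high": 2, "medium": 1, "low": 4}
print(max_parallel_wus(node0, unparked_lows=0))  # 3: all lows parked
print(max_parallel_wus(node0, unparked_lows=4))  # 7: all lows unparked
```

With all lows parked, a job bound to this node tops out at 3 concurrent WUs, however many tasks it has ready.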


If all this is true, would giving the job a higher "priority" really help?




Caution: This is the first time I've cared to look into this in such depth. 
Maybe I'm all wrong. Happy to learn the truth.


--
Peter Hunkeler



----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
