The queues aren't the issue - we're happy to have them all run at the same
priority.
The application is designed to use a producer/consumer pattern where the
recovery process(es) act as the producer(s) and this is the consumer.

Use of the queues means that there's no I/O - this is acting as an MITM -
it derives some additional data from the message and passes it through
to a TCP process that shunts it all off the mainframe. It has the side
benefit of isolating the TP system from this post-processing work, as the
only link is the queue - and we turn off processing if it fills up
(different topic entirely).
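
If it helps to picture the shape, here's a rough sketch in C - using a
System V message queue, with the key, message layout and "derive some
data" step all made up for illustration rather than lifted from the real
code:

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct recmsg {
    long mtype;              /* required first field for SysV messages */
    char payload[256];       /* record handed over by the recovery producer */
};

int main(void)
{
    /* hypothetical key - the real application's queue setup differs */
    int qid = msgget((key_t)0x5150, 0666 | IPC_CREAT);
    if (qid < 0) { perror("msgget"); return 1; }

    struct recmsg m;
    /* consumer loop: pull a record, derive the extra fields, pass it on */
    while (msgrcv(qid, &m, sizeof m.payload, 0, 0) >= 0) {
        /* ... derive additional data from m.payload here ...        */
        /* ... then hand it to the TCP shipper (not shown) ...       */
        printf("consumed: %.20s\n", m.payload);
    }
    perror("msgrcv");
    return 1;
}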

It's this quote that's driving this problem:
"z/OS handles many address spaces very well."

It's far less wonderful at handling them (at least in our experience) when
the machine is busy.
And ours is. "Buy more MIPS" isn't going to fly, so we're looking for
something creative.

There are a fixed (albeit high) number of instances in our model, so we
don't have to get fancy with dynamic startup.
The attach/manage-restart logic isn't particularly complicated - there
are no "hidden" issues there - and everything we do in this space is
written re-entrant, so the code isn't too scary.
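
For what it's worth, the structure amounts to little more than the sketch
below - done here with pthreads rather than ATTACHX, and with the worker
body reduced to a placeholder, so it's the shape rather than the real
thing:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NWORKERS 8           /* fixed instance count, picked for the sketch */

static void *worker(void *arg)
{
    /* ... consume from the queue and do the post-processing ...     */
    sleep(1);                /* stand-in for real work               */
    return arg;
}

int main(void)
{
    pthread_t tid[NWORKERS];
    long i;

    for (i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    /* supervisor loop: whoever ends (normally or not) gets restarted */
    for (;;) {
        for (i = 0; i < NWORKERS; i++) {
            pthread_join(tid[i], NULL);
            fprintf(stderr, "worker %ld ended, restarting\n", i);
            pthread_create(&tid[i], NULL, worker, (void *)i);
        }
    }
}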

The only uncertainty really is whether multiple queues is overkill.
We're choosing (at least initially) to err on the side of caution - single
point of failure and all that.
As many TCBs as we can pack in there still looks like the way to go. It
will be an interesting exercise to see how many copies we can fit before
it goes pear-shaped. I remember that when we played with C a while ago
the limit was about 63 forks before the application lost the plot. I'm
hoping we can do better here...
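
A crude probe along these lines is enough to repeat that experiment - it
just forks until fork() refuses, and where it stops depends entirely on
the system's limits (BPXPRMxx settings, region size and so on), so treat
any number it prints as local folklore:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int n = 0;

    for (;;) {
        pid_t pid = fork();
        if (pid < 0) {           /* hit the wall */
            perror("fork");
            break;
        }
        if (pid == 0) {          /* child: sit quietly until told to go */
            pause();
            _exit(0);
        }
        n++;
    }

    printf("managed %d children before fork() failed\n", n);

    signal(SIGTERM, SIG_IGN);    /* parent ignores its own broadcast */
    kill(0, SIGTERM);            /* terminate the children           */
    while (wait(NULL) > 0)
        ;
    return 0;
}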

I'm not sure why limiting the instances to the number of CPs makes any
sense.
Perhaps if this were the ~only~ thing running. What am I missing?





From:   Andy Coburn <[email protected]>
To:     [email protected]
Date:   04/10/2011 05:23 PM
Subject:        Re: Address space proliferation
Sent by:        IBM Mainframe Assembler List
<[email protected]>



>Andy
>Are you able to post your response in this list


Here is my original post. Sorry about the confusion.

There are some other considerations here, I think.

Is there already a coded and tested maintask program capable of attaching
1 to n subtasks and managing and restarting them in case of abends? The
issue of DD names must be dealt with, because each instance would have to
have a unique one in order to run in the same address space. Operator
commands to monitor the tasks would be required.

The dispatching priority of each task is controlled by the maintask and,
without some very sophisticated logic, each task would probably retain
its original priority (set on the ATTACHX macro). CHAP could be used, but
knowing which task to CHAP would be problematic.

This is not the simplest of programs to write, but it's not impossible
either. If it doesn't exist, it will have to be created in order for
multiple instances of the original stand-alone program to be run in a
single address space.

Limiting the number of instances to the number of CPs is an interesting
thought. However, if an instance does any sort of wait (e.g. for I/O),
then one instance per CP doesn't make a lot of sense.

Speaking of limits, there are, of course, limits to everything. In one
address space there is a limit to the number of TCBs that can be
attached. This limit is quite high, but there isn't an infinite amount of
SQA. CSA (as opposed to ECSA) is quite limited and some is used for each
address space. These are just some of the trade-offs that should be
considered.


If there becomes a need to establish a different priority for each POSIX
queue, then WLM is the way to do it, because CHAP can't increase the
dispatching priority of the address space, only of the task. However,
WLM does not monitor tasks within an address space, only the address
space itself.

In the best of all possible worlds, having a program that dynamically
starts processing address spaces (via MGCRE, for instance), with each
address space supporting multiple TCBs, would give the best flexibility.
Coding all of this might be more than anyone wants to do, however.

Andy Coburn
