I am just about finished implementing a set of subroutines that will utilize
the new(ish) "unbounded table" feature of Enterprise COBOL to support a "pseudo
dynamic capacity" table. It works very much as I imagine it might be
implemented within the language itself. Here's a brief overview of how it
works, and why I believe it's not as bad as you make it out to be.
LINKAGE SECTION.
01 CREDIT-TABLE.
05 CT-ROW-CNT PIC S9(9) COMP.
05 CREDIT-ENTRY OCCURS 0 TO UNBOUNDED
DEPENDING ON CT-ROW-CNT
INDEXED BY CREDIT-SUB
PIC X(125).
*> do the following one time
PERFORM CREATE-CREDIT-TABLE
*> do the following each time you want to insert a new row
PERFORM INCREMENT-CREDIT-TABLE
SET CREDIT-SUB TO CT-ROW-CNT
MOVE TRANSACTION-RECORD TO CREDIT-ENTRY (CREDIT-SUB)
*> do the following to reset the table back to "no rows"
PERFORM RESET-CREDIT-TABLE
*> do the following to release the storage for the table.
*> not really needed unless you are using the table in a called subroutine
*> and want to make the storage available once the called routine has
*> exited.
PERFORM RELEASE-CREDIT-TABLE
Here are the relevant procedures that actually invoke the 'dynamic table
routines':
CREATE-CREDIT-TABLE.
>>callinterface dll
call 'dynamic_table_create'
using value length of CREDIT-ENTRY(1)
value 20
reference address of CREDIT-TABLE
>>callinterface
exit paragraph.
INCREMENT-CREDIT-TABLE.
>>callinterface dll
call 'dynamic_table_update_size'
using reference address of CREDIT-TABLE
value +1
>>callinterface
exit paragraph.
RESET-CREDIT-TABLE.
>>callinterface dll
call 'dynamic_table_reset'
using reference address of CREDIT-TABLE
>>callinterface
exit paragraph.
RELEASE-CREDIT-TABLE.
>>callinterface dll
call 'dynamic_table_release'
using reference address of CREDIT-TABLE
>>callinterface
exit paragraph.
'dynamic_table_create' is a routine that uses CEEGTST to allocate enough space
to hold:
- a 20-byte "control block" (invisible to the caller)
- a fullword "current capacity" field (represented as CT-ROW-CNT in this
example)
- the number of rows specified by parameter 2 (20, in this case).
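Since the routines themselves can be written in any LE-conforming language, here is a minimal C sketch of the arithmetic behind that allocation (all names are mine, not the actual implementation, and plain sizing arithmetic stands in for the CEEGTST call):

```c
#include <stddef.h>

#define DT_CTL_LEN 20   /* the hidden 20-byte control block described above */

/* Hypothetical helper: the total number of bytes 'dynamic_table_create'
 * would request from storage management: control block, fullword row
 * count (what the caller sees as CT-ROW-CNT), and the initial rows. */
static size_t dt_alloc_size(size_t entry_len, size_t init_rows)
{
    return DT_CTL_LEN + sizeof(int) + entry_len * init_rows;
}
```

For the example above (125-byte entries, initial capacity of 20 rows) that comes to 20 + 4 + 2500 = 2524 bytes, with the caller's CREDIT-TABLE pointer set just past the control block.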
Note that the "current capacity" field is initially set to 0, even though there
is enough space to hold 20 rows. This means that as you call
'dynamic_table_update_size' to "add a new row" to the table, it does not call
storage management until the current logical capacity would exceed the
"current physical capacity" (a value stored in the control block mentioned
above).
Even in the case where it does increase the actual allocated capacity, it does
not do it "one row at a time". Rather, it doubles the current physical
capacity and "reallocates" (using CEECZST) the storage to the new value. This
may or may not actually cause LE storage control to reallocate out of a
different area (copying the existing data from the old allocated area). If
there is enough room already it does nothing except increase the amount
reserved for your allocation. And even then, LE has already allocated a
probably larger area prior to this from actual OS storage, depending on the
values in the HEAP runtime option.
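In C terms, the growth policy looks like this (an illustrative sketch only, with realloc standing in where the real routines use CEECZST; the struct and function names are mine):

```c
#include <stdlib.h>

/* Hypothetical bookkeeping for one table. */
struct dyn_table {
    size_t entry_len;    /* length of one row          */
    size_t logical;      /* rows in use (CT-ROW-CNT)   */
    size_t physical;     /* rows currently allocated   */
    char  *rows;
};

/* Grow (or shrink) the logical size; physical capacity doubles only
 * when the logical size would exceed it.  Returns 0 on success, -1 if
 * reallocation fails. */
static int dt_update_size(struct dyn_table *t, long delta)
{
    size_t want = t->logical + delta;
    if (want > t->physical) {
        size_t new_cap = t->physical;
        while (new_cap < want)
            new_cap *= 2;                       /* double, not +1 row */
        char *p = realloc(t->rows, new_cap * t->entry_len);
        if (p == NULL)
            return -1;
        t->rows = p;                            /* may or may not move */
        t->physical = new_cap;
    }
    t->logical = want;
    return 0;
}
```

Starting from a physical capacity of 20, the first 20 inserts touch no storage management at all; insert 21 doubles the allocation to 40 rows, insert 41 to 80, and so on, so the number of reallocations grows only logarithmically with the row count.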
'dynamic_table_reset' itself just calls the 'dynamic_table_update_size' routine
with parameter 2 being "0 - current (logical) capacity". Essentially it sets
the current logical capacity back to 0. This causes
'dynamic_table_update_size' to do a call to CEECZST to set the physical
allocation back to the initial physical allocation (that value also having been
stored in the control block area). This routine would only be used in the case
where you want to "reset the table" back to zero records several times in a
program. Not a common occurrence, I imagine, but my test case (a real life
production program!) does have one situation where this is required.
Specifically, CREDIT-TABLE represents all of the credits for one customer
account on any one day. Once we're done with the current account we "reset"
the table to "start over" for the next account.
'dynamic_table_release' simply calls CEEFRST to release the storage and set the
parameter (the address of the table) to NULL.
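A self-contained C sketch of those last two operations (with the standard allocator standing in for CEECZST and CEEFRST, and all names mine):

```c
#include <stdlib.h>

/* Hypothetical bookkeeping for one table. */
struct dyn_table {
    size_t entry_len;      /* length of one row           */
    size_t logical;        /* rows in use (CT-ROW-CNT)    */
    size_t physical;       /* rows currently allocated    */
    size_t init_physical;  /* capacity at create time     */
    char  *rows;
};

/* Reset: logical count back to 0, physical allocation shrunk back to
 * the initial capacity (the CEECZST call described above). */
static int dt_reset(struct dyn_table *t)
{
    char *p = realloc(t->rows, t->init_physical * t->entry_len);
    if (p == NULL)
        return -1;
    t->rows = p;
    t->physical = t->init_physical;
    t->logical = 0;
    return 0;
}

/* Release: free the storage (CEEFRST) and NULL the caller's pointer. */
static void dt_release(struct dyn_table **tp)
{
    free((*tp)->rows);
    free(*tp);
    *tp = NULL;
}
```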
All of this has been tested extensively with one real life production program
that has FIVE tables that I converted to use these new routines.
I intend to post the source code for these routines to the "IBM COBOL Cafe" as
soon as I can get a good "test case" that is not an actual production program!
:-)
I'm sure most people have not read this far, but if you have I welcome any
comments.
None of this eliminates my desire for IBM to implement language support for
dynamic capacity tables. I just felt I'd waited long enough that I might as
well develop my own interim solution.
Frank
________________________________
From: IBM Mainframe Discussion List <[email protected]> on behalf of
Bill Woodger <[email protected]>
Sent: Wednesday, August 3, 2016 2:08 AM
To: [email protected]
Subject: COBOL 2014 dynamic capacity tables
If you are expecting this type of table to be non-contiguous in any way, then
that breaks things.
breaks things.
You couldn't use REDEFINES (you could "ban" it. No, you can't: REDEFINES is
already banned for OCCURS DEPENDING ON, yet I use it a lot).
OK, for SEARCH, you could have special versions of the library routines (great,
two places to maintain stuff). But what about INSPECT, STRING, UNSTRING? Ban
those as well.
What about the code
an-entry ( a-subscript + n )
where + is +/- and n is a literal numeric value. OK, ban it.
What about CALL? Ban it.
What about anything except the particulars of the dynamic capacity table?
So, forget non-contiguous storage. So make the performance issue that of
keeping the table contiguous, implicitly. Like with any acquiring of storage,
"adding to it" can be a heavy process. An implicit process. Which newly-trained
CS people are not used to either knowing about or being concerned about.
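To make the contiguity point concrete: a subscript reference only works because it compiles down to plain address arithmetic over one block of storage. A C sketch of that assumption (an illustration, not actual compiler output):

```c
#include <stddef.h>

/* A subscript reference such as  an-entry (a-subscript + n)  reduces to
 * base + (subscript - 1) * entry-length, which is only meaningful if
 * every entry is contiguous with the next. */
static char *entry_addr(char *base, size_t entry_len, size_t sub)
{
    return base + (sub - 1) * entry_len;   /* 1-origin, as in COBOL */
}
```

A non-contiguous layout would break that identity, which is why everything that relies on it (the statements listed above, CALL with a table as an argument, and so on) would have to be restricted.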
Where in the DATA DIVISION would it be? LINKAGE SECTION again (non-Standard).
WORKING-STORAGE or LOCAL-STORAGE (breaks how they work currently)?
I don't think everything (by a long stretch) in the current COBOL Standard
(2014, replacing 2002) is a "good fit" for what we (that's me saying "I" and
hoping not to look entirely isolated) expect for a Mainframe COBOL.
Also, performance was not the only reason. There is "error prone" and also
these: "Some clients may have restrictions on using this. This is not in our
multi-year strategy."
It would be good, but perhaps not possible, if IBM were to outline their
multi-year strategy for COBOL. It would avoid the rejection of RFEs which
stood no chance for that reason.
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
----------------------------------------------------------------------