If you disagree, then develop the solution and submit it as a patch, and it
will be considered. Since you consider it a trivial effort, it shouldn't be
difficult.
Rick
On Wed, Aug 22, 2018 at 7:51 AM Hobart Spitz <orexx...@gmail.com> wrote:
> Anybody else disagree?
>
> What's your solution to the issues of time-sensitivity and/or large GC
> working sets?
>
> OREXXMan
> JCL is the buggy whip of 21st century computing. Stabilize it.
> Put Pipelines in the z/OS base. Would you rather process data one
> character at a time (Unix/C style), or one record at a time?
> IBM has been looking for an HLL for program products; REXX is that
> language.
>
> On Wed, Aug 22, 2018 at 7:40 AM, Rick McGuire <object.r...@gmail.com>
> wrote:
>
>> Sorry, not going to happen. This is neither trivial to implement nor
>> useful. It is generally very bad practice to try to "help" the language
>> make garbage collection decisions; it invariably leads to worse
>> performance, not better.
>>
>> Rick
>>
>> On Wed, Aug 22, 2018 at 7:25 AM Hobart Spitz <orexx...@gmail.com> wrote:
>>
>>> Let's make it simple:
>>>
>>> We add a function called (as a working title) SysStorManage defined as
>>> follows:
>>>
>>> SysStorManage(argument):
>>>
>>> Argument - a character string consisting of one or more of these
>>> characters (in upper or lower case, in any combination).
>>>
>>> A - requests results reported in an array of values.
>>>
>>> F - reports the amount of "yet to be used space available before the
>>> next GC", i.e. free space.
>>>
>>> G - forces garbage collection and reports the P value (below).
>>>
>>> P - reports 100*F/T, the percent of free space still available to be
>>> used.
>>>
>>> S - requests results reported as a string of one or more blank-delimited
>>> numbers. This is the default.
>>>
>>> T - reports the total amount of storage (used and free) in the
>>> compressible pool. This is fixed from the time the data pool is created.
>>>
>>> Return value:
>>>
>>> Per A or S, a sequence of numbers (array elements or blank-delimited
>>> string values) corresponding to the requested values.
>>>
>>> The actions are done in order:
>>>
>>> "GF" causes a GC and reports the new P value followed by the new free
>>> space value.
>>> "FG" reports the free space as it was before the GC, followed by the
>>> new P value.
>>>
>>>
>>> This allows a program to:
>>>
>>>    1. Gather an estimate of the typical usage of a section of code:
>>>    call the function before and after, and report/track the difference.
>>>    2. Force a GC before entering a time-critical block of code.
>>>    3. Make its own decision on when it wants to force an "early" GC.
>>>    4. Do something like: if SysStorManage("P") > 65 then call
>>>    SysStorManage "G"
>>>    5. Anything else along these lines.
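Items 2 and 3 above map onto a pattern that exists in other collected runtimes. As an illustration only (this uses CPython's real gc module, not ooRexx, and the helper name is invented):

```python
import gc

def run_time_critical(work, disable_gc=True):
    """Force a collection up front, then optionally keep automatic
    collection switched off while time-critical code runs (the pattern
    in items 2 and 3 above, sketched with CPython's gc module)."""
    gc.collect()                 # enter the critical section with a clean heap
    if disable_gc:
        gc.disable()             # no automatic GC pauses during the work
    try:
        return work()
    finally:
        if disable_gc:
            gc.enable()          # restore normal collection afterwards

result = run_time_critical(lambda: sum(range(10)))
print(result)   # → 45
```

The trade-off is the one Rick raises below: disabling collection defers cost rather than removing it, and the deferred collection can be larger.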
>>>
>>> Even someone who doesn't know what they are talking about can see that
>>> this is both trivial to implement and eminently useful.
>>>
>>>
>>>
>>> On Tue, Aug 21, 2018 at 7:25 AM, Rick McGuire <object.r...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, Aug 21, 2018 at 7:06 AM Hobart Spitz <orexx...@gmail.com>
>>>> wrote:
>>>>
>>>>> My bad for assuming the obvious.
>>>>>
>>>>> In #1, I was referring to the free space yet "to be given out". If I'm
>>>>> entering a block that I estimate will need 100K (or 1%) of the total
>>>>> space pool, and the available amount is less than that, GC would be
>>>>> forced, even though space was not fully exhausted. The program could
>>>>> then be reasonably assured of executing without running into a GC.
>>>>>
>>>>
>>>> This is not something that is predictable in advance. The best that can
>>>> be done is to try to maintain a minimum ratio of live-to-free storage in
>>>> the heap; that determination is made after a collection to try to
>>>> prevent GC thrashing.
>>>>
>>>>
>>>>>
>>>>> In #2, I was referring to the free space recovered so far. Presumably
>>>>> one could subtract the address of the next uncompressed item from the
>>>>> address of the end of the last compressed item. The space in between
>>>>> would be free, and the GC could be terminated at that point, satisfying
>>>>> the requirements of the soon-to-be-executed code.
>>>>>
>>>>
>>>> This makes no sense at all. By this point, the expensive part of the GC
>>>> is complete (the marking operation). Terminating the sweep early just
>>>> because you have a block of sufficient size would prevent other
>>>> optimizations and also force the GC operation to be performed more often.
>>>>
>>>>
>>>>>
>>>>> In #4, a little analysis should be able to tell you which variables
>>>>> were entirely local. DROPping them could include adding them to a
>>>>> free-space list, no reference count needed.
>>>>>
>>>>>
>>>> Seriously, this is really a place where you don't really understand
>>>> what you are talking about. There are way more things going on than
>>>> just which variables are local. Object references get passed around,
>>>> inserted in collections, etc. The marking operation is the only way to
>>>> identify storage that can be reclaimed.
>>>>
>>>> Rick
>>>>
>>>>
>>>>>
>>>>>
>>>>> On Tue, Aug 21, 2018 at 5:47 AM, Rick McGuire <object.r...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> We know how much free space is still available to be given out, but
>>>>>> we don't know the total amount of free space there is, because it is
>>>>>> likely that a lot of the previously given-out storage is now dead
>>>>>> objects. A GC event is a fairly expensive process, so you don't want
>>>>>> to do one until you really need to, which also maximizes the amount
>>>>>> of storage that gets swept up and reduces memory fragmentation.
>>>>>>
>>>>>> Rick
>>>>>>
>>>>>> On Tue, Aug 21, 2018 at 4:45 AM Hobart Spitz <orexx...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> How could you not know how much free space was left and still be
>>>>>>> able to determine if a new request can be satisfied?
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Aug 20, 2018 at 5:52 AM, Rick McGuire <object.r...@gmail.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Aug 19, 2018 at 8:23 PM Hobart Spitz <orexx...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> It might be too late in the game to make these kinds of comments,
>>>>>>>>> but I'll pose them in case it's not. Failing that, the discussion
>>>>>>>>> might be useful as input to future requirements. I have not looked
>>>>>>>>> into the current code for allocation and garbage collection, so my
>>>>>>>>> bad if this is redundant or too far afield.
>>>>>>>>>
>>>>>>>>> Consider adding one or more of these features, singly or in
>>>>>>>>> combination:
>>>>>>>>>
>>>>>>>>> 1. A statement/function/hostcommand/option to force garbage
>>>>>>>>> collection if the amount of free space is less than a specified
>>>>>>>>> amount and/or percentage. It would be issued before entry into a
>>>>>>>>> time-critical section of code. I would propose a default percentage
>>>>>>>>> of 50%.
>>>>>>>>>
>>>>>>>> Not really possible. The only way to actually know the amount of
>>>>>>>> available free space is to perform a garbage collection first, which
>>>>>>>> is a very expensive operation that we try to avoid. The algorithm
>>>>>>>> does include heap expansion when the ratio of live to free storage
>>>>>>>> falls below a threshold.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2. A statement/function/hostcommand/option to stop automatic
>>>>>>>>> garbage collection if the amount of space freed is greater than a
>>>>>>>>> specified amount and/or percentage. It would be issued before
>>>>>>>>> sections of code where short delays can be tolerated, but the long
>>>>>>>>> delays associated with full garbage collection would be less so. I
>>>>>>>>> again propose a 50% default.
>>>>>>>>>
>>>>>>>> Again, not possible. Garbage collections get performed whenever
>>>>>>>> we're unable to satisfy an allocation request.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> 3. A modification (possibly applying to certain objects only,
>>>>>>>>> e.g. large ones) that would maintain a linked list of free-space
>>>>>>>>> elements, populated each time an eligible object is completely
>>>>>>>>> DROPped (ref count = 0). A reference count might need to be added
>>>>>>>>> to the relevant object types as part of the modification. The
>>>>>>>>> feature would allow the garbage collection to terminate quickly if
>>>>>>>>> there were enough space in the free-space list. A first-fit
>>>>>>>>> algorithm might be appropriate here.
>>>>>>>>>
>>>>>>>> There are no reference counts on objects, and in general,
>>>>>>>> reference-counting garbage collection systems are really not
>>>>>>>> feasible because circular references are possible (for example, an
>>>>>>>> array that contains a reference to itself).
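The self-referencing array case is easy to demonstrate in CPython, which pairs reference counting with a tracing cycle collector precisely because refcounts alone cannot reclaim cycles:

```python
import gc

# A container that refers to itself never drops to refcount zero, so
# pure reference counting can never reclaim it.  CPython's tracing
# cycle collector exists for exactly this case.
gc.disable()                # keep the automatic collector out of the way
cycle = []
cycle.append(cycle)         # the "array that contains itself" example above
del cycle                   # refcount is still 1 (the self-reference)
found = gc.collect()        # the tracing pass finds the orphaned cycle
gc.enable()
print(found >= 1)           # → True
```

This is the same argument Rick makes: only a tracing pass from the roots can tell that the cycle is unreachable.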
>>>>>>>>
>>>>>>>>>
>>>>>>>>> 4. The addition of a reference count, or similar, and a linked
>>>>>>>>> list of free-space elements, sorted by address. Elements would be
>>>>>>>>> added opportunistically where it is known that the object had no
>>>>>>>>> additional usage; e.g. local variables in a PROCEDURE, ::ROUTINE,
>>>>>>>>> or ::METHOD not passed to another block. Adjacent free-space
>>>>>>>>> elements would be combined as detected.
>>>>>>>>>
>>>>>>>> Again, reference counts are not possible. Detection of live objects
>>>>>>>> is done by tracing and marking objects from a few root references.
>>>>>>>> The free storage is "swept up" by scanning memory looking for
>>>>>>>> unmarked objects and reclaiming that storage. Adjacent dead objects
>>>>>>>> are already recombined into larger blocks.
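The trace-mark-sweep scheme Rick describes can be sketched on a toy heap (illustrative only: real collectors walk raw memory and object headers, not dictionaries; all names here are invented):

```python
def mark(roots, references):
    """Trace live objects from the root set; `references` maps each
    object id to the ids it points at (a toy object graph)."""
    live = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj not in live:
            live.add(obj)
            stack.extend(references.get(obj, ()))
    return live

def sweep(heap, live):
    """Reclaim every unmarked object; survivors keep their storage."""
    return {obj: data for obj, data in heap.items() if obj in live}

heap = {"a": "...", "b": "...", "c": "...", "d": "..."}
refs = {"a": ["b"], "b": ["a"], "c": ["c"]}   # a<->b cycle; c refers to itself
live = mark(roots={"a"}, references=refs)
print(sorted(sweep(heap, live)))   # → ['a', 'b']  (c and d are unreachable)
```

Note that `c`, despite referring to itself (nonzero "refcount"), is reclaimed, while the reachable `a`/`b` cycle survives: marking from the roots is what distinguishes the two, which reference counts cannot do.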
>>>>>>>>
>>>>>>>> Rick
>>>>>>>>
>>>>>>>>
>>>>>>>>> An advantage, in addition to more consistent timing, might be the
>>>>>>>>> reduction in working set size, page references, and page
>>>>>>>>> modifications in paging virtual memory systems. Shortened garbage
>>>>>>>>> collection events could enable an operating system to treat the
>>>>>>>>> process more favorably, either by policy or incidentally, than if
>>>>>>>>> a full-blown garbage collection were done.
>>>>>>>>>
>>>>>>>>> The above capabilities might allow greater use of ooRexx in new
>>>>>>>>> areas where timing is critical, potentially even up to operating
>>>>>>>>> system components.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sun, Aug 19, 2018 at 6:29 PM, Rick McGuire <
>>>>>>>>> object.r...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> After the recent discussion about freeing large array storage, I
>>>>>>>>>> decided to take another crack at fixing this problem. My
>>>>>>>>>> largeobject sandbox branch is my latest attempt at this. This
>>>>>>>>>> version is actually simpler than the current one, which never
>>>>>>>>>> really fixed the issues. This version treats "big" objects
>>>>>>>>>> differently and will return the memory to the system when the
>>>>>>>>>> object is garbage collected. A "big" object is currently defined
>>>>>>>>>> as greater than 1/2 meg for the 32-bit version and 1 meg for the
>>>>>>>>>> 64-bit version. Debugging this also uncovered a couple of bugs in
>>>>>>>>>> the current memory manager (already fixed).
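The size-based split can be sketched like this (the thresholds are the ones quoted in the email above; the function name and the routing logic are invented for illustration and say nothing about how the sandbox branch actually implements it):

```python
import sys

# Thresholds from the email: 1/2 meg on 32-bit builds, 1 meg on 64-bit.
LARGE_OBJECT_THRESHOLD = (1 << 20) if sys.maxsize > 2**32 else (1 << 19)

def allocation_path(size):
    """Route big requests to a separate large-object path whose pages
    can be handed straight back to the OS when the object dies,
    instead of being recycled inside the normal heap (illustrative)."""
    return "large-object" if size > LARGE_OBJECT_THRESHOLD else "heap"

print(allocation_path(4096))      # → heap
print(allocation_path(2 << 20))   # 2 meg → large-object
```

Keeping rare, huge allocations out of the normal heap is what lets their memory go back to the system on collection rather than lingering as fragmentation.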
>>>>>>>>>>
>>>>>>>>>> A big part of the simplification came from removing a bit of
>>>>>>>>>> baggage from the original OS/2 code that doesn't really apply any
>>>>>>>>>> more. Removing that allowed a lot of additional code to be deleted
>>>>>>>>>> that probably was never getting used anyway. I'm fairly happy with
>>>>>>>>>> this version, though I might tweak some of the tuning parameters a
>>>>>>>>>> little as I throw some additional tests at it.
>>>>>>>>>>
>>>>>>>>>> Rick
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>> Check out the vibrant tech community on one of the world's most
>>>>>>>>>> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Oorexx-devel mailing list
>>>>>>>>>> Oorexx-devel@lists.sourceforge.net
>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/oorexx-devel
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>>
>>
>
>