Re: USS PFS Interface - 136 byte save area?

2016-12-08 Thread Steve Smith
The end-of-DSA address could be propagated in a field with the stack
mechanism, like the NAB is, i.e. as additional fields appended to the save
area.  Also, Metal C allows for an "environment", which, as best I can tell,
sets up a heap.  But it's optional unless you need a heap (several of the
library functions do).  It's similar to LE's CAA, rooted on R12.

sas

On Wed, Dec 7, 2016 at 9:37 AM, Tom Marchant <
000a2a8c2020-dmarc-requ...@listserv.ua.edu> wrote:

> On Wed, 7 Dec 2016 08:56:08 -0500, Steve Smith wrote:
>
> >I meant that as a general statement; it would need to be implemented by
> the
> >Metal C compiler.
>
> I understood that. There is still the problem of where to store the
> information
> so that it can be checked. Does Metal C have any global data areas that
> are available to all routines?
>
> >On Tue, Dec 6, 2016 at 5:15 PM, Tom Marchant <
> >000a2a8c2020-dmarc-requ...@listserv.ua.edu> wrote:
> >
> >> On Tue, 6 Dec 2016 16:26:24 -0500, Steve Smith wrote:
> >>
> >> >However, it's really very simple to add a stack-overflow check to entry
> >> >logic
> >>
> >> Yes, if you have the address of the end of the "stack". AFAICT, there
> >> is no place to store that information for each routine to check it.
>



-- 
sas

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: USS PFS Interface - 136 byte save area?

2016-12-07 Thread Tom Marchant
On Wed, 7 Dec 2016 08:56:08 -0500, Steve Smith wrote:

>I meant that as a general statement; it would need to be implemented by the
>Metal C compiler.

I understood that. There is still the problem of where to store the information 
so that it can be checked. Does Metal C have any global data areas that 
are available to all routines?

>On Tue, Dec 6, 2016 at 5:15 PM, Tom Marchant <
>000a2a8c2020-dmarc-requ...@listserv.ua.edu> wrote:
>
>> On Tue, 6 Dec 2016 16:26:24 -0500, Steve Smith wrote:
>>
>> >However, it's really very simple to add a stack-overflow check to entry
>> >logic
>>
>> Yes, if you have the address of the end of the "stack". AFAICT, there
>> is no place to store that information for each routine to check it.



Re: USS PFS Interface - 136 byte save area?

2016-12-07 Thread Steve Smith
I meant that as a general statement; it would need to be implemented by the
Metal C compiler.

sas

On Tue, Dec 6, 2016 at 5:15 PM, Tom Marchant <
000a2a8c2020-dmarc-requ...@listserv.ua.edu> wrote:

> On Tue, 6 Dec 2016 16:26:24 -0500, Steve Smith wrote:
>
> >However, it's really very simple to add a stack-overflow check to entry
> >logic
>
> Yes, if you have the address of the end of the "stack". AFAICT, there
> is no place to store that information for each routine to check it.
>
> --
> Tom Marchant
>



-- 
sas



Re: USS PFS Interface - 136 byte save area?

2016-12-06 Thread Tom Marchant
On Tue, 6 Dec 2016 16:26:24 -0500, Steve Smith wrote:

>However, it's really very simple to add a stack-overflow check to entry
>logic

Yes, if you have the address of the end of the "stack". AFAICT, there 
is no place to store that information for each routine to check it.

-- 
Tom Marchant



Re: USS PFS Interface - 136 byte save area?

2016-12-06 Thread Steve Smith
DXDs, aka pseudo-registers, have lately been somewhat generalized into
"no-load" binder classes.  C and other HLLs use them, but not for the
DSA/stack that I know of.

I think the Metal C convention of pre-setting the save area forward pointer
is pretty clever.  It doesn't interfere with any traditional save area
linkage, and is mighty efficient.

ALL HLLs use a stack.  LE handles stack overflow.  With Metal C, you place
your bet, test, and hope you've got it covered.  If your program is complex
enough to cause you doubt, maybe you should use LE.

However, it's really very simple to add a stack-overflow check to entry
logic, and I think IBM will have to do that someday for Metal C.  Abends
are infinitely better than memory corruption.
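A minimal model of that entry-time check in plain C. This assumes some convention has made an end-of-stack address available to each routine (today Metal C carries no such field, which is exactly Tom's objection); the function name and parameters are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Returns 1 if a new frame of frame_len bytes, carved at the NAB,
 * still fits at or below stack_end; 0 if it would overflow.  A real
 * prologue would abend (or branch to an overflow routine) rather than
 * fall through and corrupt storage. */
static int frame_fits(uintptr_t nab, uintptr_t stack_end, size_t frame_len)
{
    /* Check nab first so the subtraction below cannot wrap. */
    return nab <= stack_end && frame_len <= stack_end - nab;
}
```

Two compares and a conditional branch per entry, which is roughly the cost LE already pays for its own stack-extension check.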

sas

On Tue, Dec 6, 2016 at 11:14 AM, Tony Harminc  wrote:

> On 6 December 2016 at 10:39, Tom Marchant <
> 000a2a8c2020-dmarc-requ...@listserv.ua.edu> wrote:
>
> > On Tue, 6 Dec 2016 09:20:33 -0500, Steve Smith wrote:
> >
> > >Metal C linkage requires your own forward chain to be preset to the NAB.
> > I
> > >think LE C just uses the NAB.  Regardless, that shouldn't cause a
> problem
> > >either.
> >
> > I never heard of such a thing, so I looked it up.
> >
> > It is a rather bizarre requirement that seems to me to contradict the
> > standard linkage conventions that it claims to follow.
> >
> > It means that a program must know not only its own storage
> > requirements, but the storage requirements of every program that it
> > calls, and every program that they call, etc.
> >
> > There is no mechanism to verify that the allocated storage hasn't been
> > exceeded, or to ensure that the storage that is obtained is adequate.
> >
>
> Well, there is... The traditional approach to this is to use DXD and CXD to
> have the Binder add up all the requirements at link time. This scheme is
> sometimes known as "Pseudo registers", and is (was?) used by PL/I. It works
> well; the only real downside is that it doesn't account for recursive
> calls, whether direct or indirect.
>
> Tony H.
>



-- 
sas



Re: USS PFS Interface - 136 byte save area?

2016-12-06 Thread Tony Harminc
On 6 December 2016 at 10:39, Tom Marchant <
000a2a8c2020-dmarc-requ...@listserv.ua.edu> wrote:

> On Tue, 6 Dec 2016 09:20:33 -0500, Steve Smith wrote:
>
> >Metal C linkage requires your own forward chain to be preset to the NAB.
> I
> >think LE C just uses the NAB.  Regardless, that shouldn't cause a problem
> >either.
>
> I never heard of such a thing, so I looked it up.
>
> It is a rather bizarre requirement that seems to me to contradict the
> standard linkage conventions that it claims to follow.
>
> It means that a program must know not only its own storage
> requirements, but the storage requirements of every program that it
> calls, and every program that they call, etc.
>
> There is no mechanism to verify that the allocated storage hasn't been
> exceeded, or to ensure that the storage that is obtained is adequate.
>

Well, there is... The traditional approach to this is to use DXD and CXD to
have the Binder add up all the requirements at link time. This scheme is
sometimes known as "Pseudo registers", and is (was?) used by PL/I. It works
well; the only real downside is that it doesn't account for recursive
calls, whether direct or indirect.
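A toy model of the scheme in C, not real Binder code: every module contributes a DXD length, and the CXD value the Binder resolves at link time is just the sum. The doubleword rounding here is an assumption for illustration; the real Binder honors each DXD's declared alignment.

```c
#include <assert.h>
#include <stddef.h>

/* Sum the per-module DXD lengths, rounding each to a doubleword,
 * standing in for the single CXD value the Binder resolves. */
static size_t cxd_total(const size_t *dxd_len, size_t n)
{
    size_t total = 0;
    for (size_t i = 0; i < n; i++)
        total += (dxd_len[i] + 7) & ~(size_t)7;
    return total;
}
```

Because the sum is fixed at link time, one GETMAIN of the CXD value covers every module's cell, which is also why the scheme cannot account for recursion: a routine that calls itself needs its cell more than once.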

Tony H.



Re: USS PFS Interface - 136 byte save area?

2016-12-06 Thread Tom Marchant
On Tue, 6 Dec 2016 09:20:33 -0500, Steve Smith wrote:

>Metal C linkage requires your own forward chain to be preset to the NAB.  I
>think LE C just uses the NAB.  Regardless, that shouldn't cause a problem
>either.

I never heard of such a thing, so I looked it up.

It is a rather bizarre requirement that seems to me to contradict the 
standard linkage conventions that it claims to follow.

It means that a program must know not only its own storage 
requirements, but the storage requirements of every program that it 
calls, and every program that they call, etc.

There is no mechanism to verify that the allocated storage hasn't been 
exceeded, or to ensure that the storage that is obtained is adequate.

https://www.ibm.com/support/knowledgecenter/SSLTBW_2.2.0/com.ibm.zos.v2r2.ccrug00/mvslnkcnv.htm

-- 
Tom Marchant



Re: USS PFS Interface - 136 byte save area?

2016-12-06 Thread Steve Smith
Metal C linkage requires your own forward chain to be preset to the NAB.  I
think LE C just uses the NAB.  Regardless, that shouldn't cause a problem
either.

Anyway, I know Jerry, and he certainly already knows this much.

sas

On Mon, Dec 5, 2016 at 4:38 PM, Tom Marchant <
000a2a8c2020-dmarc-requ...@listserv.ua.edu> wrote:

> On Mon, 5 Dec 2016 14:01:47 -0600, Jerry Callen wrote:
>
> >I'm writing a Physical File System (PFS) for USS, and I've
> >come across an oddity in the interface documentation. The
> >"Environment for PFS Operations" section of z/OS UNIX
> >System Services File System Interface Reference says that,
> >on entry to VFS function routines, R13 is "Save area address,
> >of a 136-byte save area".
>
> I have never heard of a 136-byte save area either. However,
> when a service documents its requirements for a save area,
> the only thing that is expected of the caller is to provide the
> address of a large enough area. If I was calling these routines,
> I would provide a 144-byte save area. If it uses only 136 bytes,
> there is no harm done. It may be that these routines use the
> save area passed in register 13 in some nonstandard way. In
> that case, IPCS won't be much help if there is an abend in the
> service routine.
>
> >I don't think there is such a beast. I'm guessing it means that
> >I should treat this as a 144-byte F4SA, and not chain forward
> >(using the "next" slot at +136 of the old save area).
>
> You don't treat the area that you pass to other programs in
> any way. If you want to forward chain, you do that in
> accordance with the save area format that you used to
> save your caller's registers. If you were passed a 72-byte
> save area that you used to save your caller's registers in
> standard 72-byte format, you would use the fullword at
> offset 8 for the forward chain, regardless of the size of the
> save area that you pass to other routines. When you
> provide a save area, you can call one program that requires
> a 144-byte save area and another that requires a 72-byte
> save area. You would use the same 144-byte save area for
> both calls. Each program would use what it needs.
>
> --
> Tom Marchant
>



-- 
sas



Re: USS PFS Interface - 136 byte save area?

2016-12-05 Thread Tom Marchant
On Mon, 5 Dec 2016 14:01:47 -0600, Jerry Callen wrote:

>I'm writing a Physical File System (PFS) for USS, and I've 
>come across an oddity in the interface documentation. The 
>"Environment for PFS Operations" section of z/OS UNIX 
>System Services File System Interface Reference says that, 
>on entry to VFS function routines, R13 is "Save area address, 
>of a 136-byte save area". 

I have never heard of a 136-byte save area either. However, 
when a service documents its requirements for a save area, 
the only thing that is expected of the caller is to provide the 
address of a large enough area. If I was calling these routines, 
I would provide a 144-byte save area. If it uses only 136 bytes, 
there is no harm done. It may be that these routines use the 
save area passed in register 13 in some nonstandard way. In 
that case, IPCS won't be much help if there is an abend in the 
service routine.

>I don't think there is such a beast. I'm guessing it means that 
>I should treat this as a 144-byte F4SA, and not chain forward 
>(using the "next" slot at +136 of the old save area). 

You don't treat the area that you pass to other programs in 
any way. If you want to forward chain, you do that in 
accordance with the save area format that you used to 
save your caller's registers. If you were passed a 72-byte 
save area that you used to save your caller's registers in 
standard 72-byte format, you would use the fullword at 
offset 8 for the forward chain, regardless of the size of the 
save area that you pass to other routines. When you 
provide a save area, you can call one program that requires 
a 144-byte save area and another that requires a 72-byte 
save area. You would use the same 144-byte save area for 
both calls. Each program would use what it needs.
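The traditional 72-byte layout Tom is describing, as a C overlay sketch. This is a model for illustration only; check the IHASAVER mapping before relying on these offsets.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Standard 72-byte save area: back chain at +4, the fullword at +8
 * used for the forward chain, then R14, R15, and R0 through R12. */
struct save_area_72 {
    uint32_t word0;       /* +0  reserved / language use */
    uint32_t back_chain;  /* +4  address of caller's save area */
    uint32_t fwd_chain;   /* +8  forward chain to the next save area */
    uint32_t r14;         /* +12 return address */
    uint32_t r15;         /* +16 entry point */
    uint32_t r0_r12[13];  /* +20 R0..R12 */
};
```

The point is that the forward-chain slot belongs to the format you used to save your caller's registers, not to whatever area you hand to your callees.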

-- 
Tom Marchant



USS PFS Interface - 136 byte save area?

2016-12-05 Thread Jerry Callen
I'm writing a Physical File System (PFS) for USS, and I've come across an 
oddity in the interface documentation. The "Environment for PFS Operations" 
section of z/OS UNIX System Services File System Interface Reference says that, 
on entry to VFS function routines, R13 is "Save area address, of a 136-byte 
save area". I don't think there is such a beast. I'm guessing it means that I 
should treat this as a 144-byte F4SA, and not chain forward (using the "next" 
slot at +136 of the old save area). 
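For reference, the 144-byte F4SA as I understand it, sketched as a C overlay; verify against the IHASAVER mapping before relying on it. The "next" slot mentioned above is the doubleword at +136.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Format-4 save area (F4SA), 144 bytes: eyecatcher at +4, 64-bit
 * registers 14, 15, 0-12 at +8, back chain at +128, forward
 * ("next") chain at +136. */
struct f4sa {
    uint32_t word0;       /* +0   reserved */
    uint32_t eyecatcher;  /* +4   C'F4SA' */
    uint64_t regs[15];    /* +8   R14, R15, R0..R12 (64-bit) */
    uint64_t prev;        /* +128 back chain */
    uint64_t next;        /* +136 forward chain */
};
```

On this reading, "136-byte save area" would mean the F4SA minus its final forward-chain doubleword, i.e. everything a called routine stores minus the slot it would only need for chaining onward.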

Has anyone else used this interface, and if so, can you confirm that this is 
how it's supposed to work? This is supervisor-state, key-zero code, so I'm 
eager to get it right the first time. :-) Thanks!

-- Jerry
