Speaking of skulker, I have implemented it via cron to keep /tmp cleaned 
up.   A few files get written there by started tasks that run for the life of 
the IPL (mostly TCPIP related) and exceed the age threshold skulker cleans 
up against, so I created another script that runs prior to skulker and simply 
does a touch -c, updating the date/time stamp of each file in a list of 
"loved files" if it already exists.   I didn't initially create this script, 
as most of the tasks didn't complain, until we implemented the IOBSNMP 
extension of SNMP, which writes a dip_socket file there that, if deleted, 
causes problems for it.
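A minimal sketch of such a pre-skulker refresh script (the list file path and its one-pathname-per-line format are my assumptions, not the actual script):

```shell
#!/bin/sh
# Sketch: refresh timestamps on "loved" files so skulker leaves them alone.
# /etc/loved_files.list is a hypothetical location, one pathname per line.
LOVED_LIST=${LOVED_LIST:-/etc/loved_files.list}

# Nothing to do if the list is absent.
[ -r "$LOVED_LIST" ] || exit 0

while read -r f; do
    [ -z "$f" ] && continue
    # touch -c updates the date/time stamp only if the file already
    # exists; it never creates a missing file.
    touch -c "$f"
done < "$LOVED_LIST"
```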

_________________________________________________________________
Dave Jousma
Manager Mainframe Engineering, Assistant Vice President
[email protected]
1830 East Paris, Grand Rapids, MI  49546 MD RSCB2H
p 616.653.8429
f 616.653.2717


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[email protected]] On Behalf 
Of John McKown
Sent: Monday, December 04, 2017 9:29 AM
To: [email protected]
Subject: Re: Passing data from step-to-step in single job using memory??

**CAUTION EXTERNAL EMAIL**

**DO NOT open attachments or click on links from unknown senders or unexpected 
emails**

On Sun, Dec 3, 2017 at 10:39 PM, David Crayford <[email protected]> wrote:

> You don't need to be authorized to use z/OS UNIX shared memory segments.
> You do need access to the file system and the memory segments are 
> protected using the normal UNIX permissions. Semaphores and pthread 
> mutexes can also reside in shared memory for inter-process locking.
>
> The /tmp TFS is backed by memory so the easiest solution may be to use 
> files in the TFS.
>
>
The only problem that I have ever had with using /tmp is that, if it were 
"released" (i.e. documented and its use encouraged) to programmers, then I 
would run into complaints about "running out" of space on /tmp and about file 
name collisions. E.g. one job using up all the space in /tmp because the JCL 
writer believed it would be just a few records, but then a "bug" caused it to 
try to write millions (this has happened here). Also, our programmers at least 
tend to be "lazy" and expect z/OS to "clean up" after them (some of them don't 
even CLOSE their data sets). So I would need to manage the files & directories 
in /tmp, most likely using skulker. And, of course, that means that eventually 
somebody will whine that one of their files got deleted.
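The cron side of that arrangement might look like the following hypothetical crontab fragment (paths and times are assumptions, and skulker's exact options depend on your copy of the /samples/skulker sample script, so check yours before using any flags):

```shell
# Refresh the "loved files" first, then let skulker sweep /tmp.
0 2 * * * /usr/local/bin/refresh_loved_files.sh
# Delete files in /tmp older than 7 days (argument form taken from
# the z/OS /samples/skulker sample script; verify against your copy).
15 2 * * * /u/admin/bin/skulker /tmp 7
```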

I have addressed a _UNIX shell user_ using too much "temp" space by making 
/tmp2 an "automount controlled" mount point and forcing all the "temporary 
file location" environment variables, such as TMP, TMPDIR, TEMP, et al., to be 
/tmp2/&SYSUID. So when a shell user runs a properly coded shell command, their 
temporary files go into their own zFS filesystem data set.
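A sketch of what those /etc/profile settings might look like, assuming $LOGNAME resolves to the same user ID that &SYSUID yields in the automount map (this is my illustration, not the actual profile):

```shell
# Point the common "temporary file location" variables at the
# per-user automount point. /tmp2/$LOGNAME mirrors the
# /tmp2/&SYSUID entry in the (assumed) automount map.
TMPDIR=/tmp2/$LOGNAME
TMP=$TMPDIR
TEMP=$TMPDIR
export TMPDIR TMP TEMP
```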


--
I have a theory that it's impossible to prove anything, but I can't prove it.

Maranatha! <><
John McKown

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
[email protected] with the message: INFO IBM-MAIN

This e-mail transmission contains information that is confidential and may be 
privileged.   It is intended only for the addressee(s) named above. If you 
receive this e-mail in error, please do not read, copy or disseminate it in any 
manner. If you are not the intended recipient, any disclosure, copying, 
distribution or use of the contents of this information is prohibited. Please 
reply to the message immediately by informing the sender that the message was 
misdirected. After replying, please erase it from your computer system. Your 
assistance in correcting this error is appreciated.

