It doesn't matter what OS is accessing the disk. The CMS minidisk file system was not designed for concurrent access while there is an active writer, as the reader has no way to know that the disk has become an active minefield.
In CMS, IIRC, the only thing persistent in cache is the directory, so re-ACCESSing the disk gets you an updated directory, though not necessarily a valid one. Any transient cached data blocks are discarded. The more often the disk changes, the less likely the directory is to be valid. Or, put another way, the more likely you are to see invalid control data or, worse, stale *valid* control data. Imagine reading a file that is a mix of data blocks from multiple files. (This is why we created SFS - we needed this all to just Go Away.)

Linux has not only the directory cache, but also the disk block cache. With CMS on different systems, we would see this same effect caused by the CP-managed minidisk cache: you have to turn off MDC for a minidisk on System B to see the changes made by System A. (This is something that SSI manages for you.) So you not only have to unmount the file system to throw away any cached file system control structures, you have to ensure that you discard the cached data maintained by the block driver (control and data), AND you have to worry about MDC. (Watch out for 2nd-level MDC, too!)

As a matter of Computer Science, stale data is the Enemy. Without SFS (a database), you have a window that cannot be closed unless you layer a control plane on top of (or within) the file system to manage concurrent access.

Don't get me wrong. Getting data from a CMS disk is fine as long as you understand what's happening at the file system level and take steps to mitigate the issues introduced by your usage. If you have automation to trigger an unmount and take the disk offline on all linked Linux servers before you update the CMS file, and then tell them all to bring it online and mount it again afterward, you're good to go. (There's a rough sketch of that sequence at the bottom of this note.) You might also place a hash value in the first line of a file that tells you whether the rest of the file is self-consistent (also sketched below). It all depends on your level of paranoia and the impact of getting bad data.

And think about *why* you are sharing with CMS. History? Habit? Inherent CMS coolness factor? For example, I see people using CMS disks to hold Linux network config data that they read at Linux boot. I suggest that they use persistent DHCP instead, if they really think they're going to change the IP address of a server. Otherwise, simply feed that data to the installation parm file creation process and leave CMS out of the boot equation. Or, if you must, boot CMS first and pull the information out of SFS onto a local CMS disk, or put it in a TAG on the printer, or .... Then boot Linux and have Linux use the local copy. No worries about concurrent disk updates.

The tricks we used in the Before Times were good enough. There were more plusses than minuses. But today, the expectations are higher and the cost of failure is higher. "Hey. Let's be careful out there!"

Regards, Alan

Alan Altmark
IBM Senior z/VM Engineer and Consultant
1 607 321 7556 (Mobile)
alan_altm...@us.ibm.com

> -----Original Message-----
> From: Linux on 390 Port <LINUX-390@VM.MARIST.EDU> On Behalf Of Donald Russell
> Sent: Friday, December 27, 2024 9:42 PM
> To: LINUX-390@VM.MARIST.EDU
> Subject: [EXTERNAL] Re: [LINUX-390] Accessing cms disks from Linux
>
> Thanks Rick,
> Yes, I know and expect CMS access to work that way. If I have an R/O link to a disk and that disk
> changes, I have to access it again. That's because the ACCESS command copies the disk directory to my
> storage/memory. If the disk contents change, my in-memory copy doesn't know about it. That's not
> too bad; CMS is a single-user system.
>
> On Linux, if I have to take the device offline/online to ensure I don't see stale data, that can impact
> other Linux users.
>
> Good tip about flushing the cache… I'll see if that's an option. Even just unmounting the file system
> could fail if it's in use. My bash script tries to unmount/mount. If the unmount fails, it issues a warning
> saying the disk contents may be stale.
>
> I suppose another approach is to keep the device offline and serialize access, so a process that needs
> those files would get exclusive use of a semaphore, bring the disk online, mount it, use whatever,
> unmount it, take it offline, and finally release the semaphore so another waiting process could access
> it. That would ensure freshness, but it sure seems ugly in a multiuser environment. 🙁
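P.S. Here is a minimal sketch of the unmount / offline / online / remount sequence I mentioned above. The bus ID, device node, and mount point are made-up examples, and I'm assuming the minidisk is mounted with cmsfs-fuse from s390-tools; adjust everything for your shop. MDC still has to be handled on the z/VM side (for example, MINIOPT NOMDC in the minidisk's directory entry); nothing here touches it.

    #!/bin/bash
    # refresh-cms.sh -- force a fresh view of a CMS minidisk on Linux.
    # All names below (bus ID, device node, mount point) are examples.
    set -e
    BUSID=0.0.0191
    DEV=/dev/dasde
    MNT=/mnt/cms0191

    umount "$MNT"                        # drop file system control structures
    chccwdev -d "$BUSID"                 # offline: release the block device
    sync
    echo 3 > /proc/sys/vm/drop_caches    # belt and braces: flush page/dentry caches
    chccwdev -e "$BUSID"                 # online again
    lsdasd "$BUSID"                      # confirm the node name; it can change
    cmsfs-fuse "$DEV" "$MNT"             # remount the CMS file system (s390-tools)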
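And the hash-in-the-first-line idea, assuming the writer puts the sha256 of everything after line 1 into line 1 (the file name and the hashing convention are just for illustration):

    #!/bin/bash
    # check-consistency.sh FILE -- verify a file whose first line holds the
    # sha256 of everything after it. Non-zero exit means the file is stale
    # or was caught mid-update.
    file="$1"
    expected=$(head -n 1 "$file")
    actual=$(tail -n +2 "$file" | sha256sum | awk '{print $1}')
    if [ "$expected" != "$actual" ]; then
        echo "WARNING: $file failed self-consistency check" >&2
        exit 1
    fi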
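Finally, the serialize-with-a-semaphore idea in Donald's note maps pretty naturally onto flock(1). A sketch, reusing the same made-up names, with error cleanup (trap) omitted for brevity:

    #!/bin/bash
    # with-cms-disk.sh CMD [ARGS...]
    # Take a lock, bring the device online, mount, run the command, then
    # tear it all down and release the lock. Other invocations wait.
    set -e
    LOCK=/var/lock/cms0191.lock
    BUSID=0.0.0191
    DEV=/dev/dasde
    MNT=/mnt/cms0191

    exec 9>"$LOCK"
    flock 9                        # the "semaphore": exclusive across processes

    chccwdev -e "$BUSID"           # bring the DASD online
    mkdir -p "$MNT"
    cmsfs-fuse "$DEV" "$MNT"       # mount the CMS file system

    rc=0
    "$@" || rc=$?                  # run whatever needs the files

    umount "$MNT"
    chccwdev -d "$BUSID"           # offline again; lock drops when we exit
    exit $rc

Something like "./with-cms-disk.sh cp /mnt/cms0191/SOME.FILE /tmp/" then waits its turn, which is exactly the ugliness Donald points out, but it does guarantee freshness.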