My experience with Linux (RHEL) under z/VM is that Linux tries to cache as much of its file system(s) as it can. To me this is a waste of RAM.

So here is an interesting situation: Linux has its own file systems, and then it MOUNTS a read-only file system over IP that lives on another platform (NFS?). That file system gets cached too -- but it is read-only, so if any of its data gets updated, it was updated by the platform that owns that file structure.

I tried to get the Linux people to run an experiment: turn off file caching in Linux and see if the z/Arch caching would be just as fast, or nearly so.
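For anyone wanting to try something similar: there is no single switch that turns the Linux page cache off, so an experiment like this has to work file by file. A minimal sketch (assuming Linux and Python 3) that evicts one file's pages from the cache with posix_fadvise(POSIX_FADV_DONTNEED):

```python
import os

def evict_from_page_cache(path):
    """Ask the kernel to drop this file's pages from the page cache."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.fsync(fd)  # dirty pages cannot be dropped, so flush them first
        # offset=0, length=0 means "the whole file"
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)
```

System-wide, the closest knobs are the vm.vfs_cache_pressure sysctl and writing to /proc/sys/vm/drop_caches, but both only influence or momentarily clear the cache rather than disable it, which is why the experiment is awkward to run.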

Why was I pushing for this? Because ACE (the replacement for IIB) is a CORE HOG, as we used to say. In migrating to it, it immediately needed at least 1 GB more of RAM. So in running production we had those servers using 16-32 GB of RAM (eventually some were out at 64 GB!!), with us also specifying VDISK for swap space. (VDISK is actually C-Store, i.e. RAM, being used as disk as far as Linux can tell, so it was effectively E-Store for paging.)

Just getting Linux to stop caching file systems could possibly get us back 20% of the RAM in an LPAR running Linux on z. (Long story to explain this, but yes, we could have reduced the RAM in those servers by 1-2 GB each if we could prove no impact to production response.)
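To put a number on that before running any experiment, the cache can be measured directly on each server. A small sketch (assuming Linux and Python 3) that reads the file-cache figure out of /proc/meminfo:

```python
def page_cache_mib():
    """Report how much RAM Linux is currently using as file cache, in MiB."""
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            fields[key] = int(rest.split()[0])  # values are in kB
    # "Cached" is the page cache minus swap cache; most of it is
    # reclaimable file data, which is the RAM in question here.
    return fields["Cached"] / 1024

print(f"page cache: {page_cache_mib():.0f} MiB")
```

Comparing that figure against total RAM across the LPAR's guests would show whether the 20% estimate holds.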

Oh, and you do not page Linux -- you have it do all its own paging. Why? Paging a guest that is itself paging is like going back to shadow tables and the like for running MVS under VM/370, prior to IEF/SIE. (This being an Assembler list and not specifically a VM group: IEF is the Interpretive Execution Facility, which has a single instruction, SIE (Start Interpretive Execution).) With that instruction, any time the guest does something that would affect all users, SIE takes an interrupt and drops out to CP; CP then figures out what really needs to be done, does it, and then returns to the guest via SIE so that the guest O/S thinks it did whatever it asked for. That is the short explanation. Since I never had access to IEF at Amdahl, I only know this much of SIE, and since those days we are several versions of IEF beyond what we did back then.
------
Does anyone know of anyone who has done this experiment to see if z/Arch caching works as well as Linux caching a file system?


Steve Thompson

On 8/8/2023 4:30 AM, [email protected] wrote:
-----Original Message-----
From: IBM Mainframe Assembler List <[email protected]>
On Behalf Of Jon Perryman
Sent: Monday, August 7, 2023 11:05 PM
To: [email protected]
Subject: Re: Will z/OS be obsolete in 5 years?

On Thu, 20 Jul 2023 at 09:01, Rob van der Heij <[email protected]> wrote:
It would be interesting to see your evidence of IBM Z not performing well with
Linux.

Linux on z performs better than Linux on most other hardware. My point is that
Linux wastes much of z hardware.

Since I haven't seen Linux on z, I have to make some assumptions. It's probably
fair to say the Linux filesystem still uses block allocation. Let's say it's a
10-disk filesystem and 100 people are writing 1 block repeatedly at the same
time. After each writes 10 blocks, where are the 10 blocks for a specific user?

In z/OS you know exactly where those blocks would be in the file. If you read
that file, are these blocks located sequentially? While the filesystem can make
a few decisions, it's nothing close to the planning provided by SMS, HSM, SRM
and other z/OS tools.
Yes, but do you really? If you allocate a fixed file size you are wasting the
unused space at the end of the file, and if you run out of space the overflow
goes elsewhere.
I would argue that Linux is better at using disk capacity, as you only ever
waste half a block on average. Yes, the blocks might be scattered, but how much
data is on spinning disk these days and how much on SSD?
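On the fixed-file-size point, Linux filesystems also allocate blocks lazily, so declaring a large size up front does not have to burn the space. A quick sketch (assuming Linux, Python 3, and a filesystem with sparse-file support such as ext4 or xfs) comparing a file's logical size with what is actually allocated:

```python
import os
import tempfile

def logical_vs_allocated(path):
    # st_size is the logical length; st_blocks counts the 512-byte
    # units actually allocated on disk.
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512

# Create a 10 MiB sparse file: full logical size, (almost) no blocks.
fd, path = tempfile.mkstemp()
os.close(fd)
os.truncate(path, 10 * 1024 * 1024)
size, allocated = logical_vs_allocated(path)
print(size, allocated)  # allocated stays far below the logical size
os.unlink(path)
```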

Like MS Windows disks, Linux filesystems can benefit from defrag. Also consider
what happens when Linux needs more CPUs than are available: clustering must be
implemented to increase the number of CPUs, and the cluster nodes do not share
a filesystem. In z/OS, a second box has full access to all files because of
Sysplex.

If the data is in a SAN, multiple systems can access it without a SYSPLEX...

I'm sure IBM has made improvements, but some design limitations will be
difficult to resolve without the correct tools. For instance, can DB2 for
Linux on z share a database across multiple z frames? It's been a while since
I last looked, but DB2 for z/OS was used because it outperformed DB2 for
Linux on z.
Why use DB2?

Dave
