Re: z/Linux 32-bit modules

2018-05-23 Thread Timothy Sipples
Paul Edwards wrote:
>I don't want to use -m64 because that uses the
>64-bit registers for everything, but I wish to produce
>compact modules using only 32-bit registers and
>pointers.

OK, so let's dig into this a bit. Have you taken one or more of your
programs and compared -m31 and -m64 variants? How much more compact is the
-m31 variant? And do you have any indication of what impact that difference
yields, such as on performance? Quantifying the potential benefit is
important.

By the way, Java and Java run-times are agnostic to such issues. In IBM's
64-bit JVMs, including those for z/OS and for Linux on Z/LinuxONE, there's
an interesting "halfway house" feature called "compressed references." This
feature is automatically enabled when the Java heap size is configured
below a certain amount which varies depending on platform and JVM release
level but is never less than 25 GiB minus 16 bytes. "Compressed references"
means that Java object references are stored in 32-bit representation, so
the object size is the same as a 32-bit object. I'll let IBM explain more:

"As the 64-bit objects with compressed references are smaller than default
64-bit objects, they occupy a smaller memory footprint in the Java heap.
This results in improved data locality, memory utilization, and
performance. You might consider using compressed references if your
application uses a lot of native memory and you want the VM to run in a
small footprint."

In that particular set of use cases that IBM describes, evidently there's
enough of a benefit with compressed references in Java. Otherwise,
presumably IBM wouldn't have implemented the feature.

You could do something similar in C programs, I imagine. You'd still
compile -m64, but you'd embed "bracketed" AMODE31 code (with 2 GiB
addressing) where it makes sense for performance or compactness, if it
does. At least, that's my broad understanding of how it'd work. Moreover,
conceivably an optimizing compiler could do this for you, perhaps with some
"hinting," analogous to how IBM's JVM and JIT handle this optimization
with compressed references.

That brings up an interesting point about running compactness tests. It'd
be best to run a couple tests using the latest releases of the optimizing
compilers, and to direct them to do as much optimization as they know how.
I know of four C/C++ compilers for Linux on Z/LinuxONE:

* GNU (gcc family)
* Clang/LLVM
* IBM XL C/C++
* Dignus

If you can run tests with them all across at least a couple of your
programs, fantastic. There's a trial edition of IBM's compiler here:

https://www.ibm.com/developerworks/downloads/r/xlcpluslinuxonz/index.html

Dignus has a Web-based trial which might be enough for these purposes.
Details here:

http://www.dignus.com/products.shtml

Does anyone happen to know if expanded storage and/or data spaces would be
relevant and useful here?

Finally, I don't think there's a strong argument for *disk* storage
compactness of program modules, within reason. Apple seems to have no
trouble now distributing only 64-bit mobile apps, even if they might be
slightly larger stored on the (relatively tiny) flash media in their 64-bit
iPhones, iPads, and iPod touches. Memory and especially processor resource
efficiency could be interesting if it's significant, but maybe this is an
optimizing compiler job rather than a kernel one?


Timothy Sipples
IT Architect Executive, Industry Solutions, IBM Z & LinuxONE,
Multi-Geography
E-Mail: sipp...@sg.ibm.com

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: 7.5 package levels

2018-05-23 Thread Timothy Sipples
Russ Herrold wrote:
>It may turn out that we (ClefOS) need to fork and offer two
>variants

I guess I'd call them "streams" rather than "forks."

For what it's worth, Red Hat seems to offer at least 3 major streams now:
Fedora (their "community" release), RHEL Structure A, and RHEL. The RHEL
Structure A/RHEL pair of streams is a unique offering for the s390x
architecture branch, at least for now. (Is it a one-time aberration or the
start of something new? I have no idea, so ask Red Hat, I guess.) In RHEL
7.5, Red Hat decided to offer kernel 3.10 (only) for all POWER processors
prior to POWER9, and (only) kernel 4.14 for POWER9. For X86-64 it's only
3.10, and for ARM64 it's only 4.14.

There are certain newer capabilities that RHEL 7.5 doesn't support on s390x
that RHEL 7.5 Structure A does. Red Hat's release notes explain all that.
But it's possible to mix RHEL and RHEL Structure A instances on the same
machine and in a Red Hat supported way. (And, for that matter, other
supported RHEL releases.)

It looks like the minimum RHEL 7.5/RHEL 7.5 Structure A machine model
requirement hasn't changed since RHEL 7.4, so it's z196/z114 processors or
higher, which includes all LinuxONE machines.

I don't have a strong view on the "right" approach for Linux release
streams. It really depends on end users and what they prefer, and they
might choose particular Linux distributors based on their different
release/service stream approaches. There are some important principles,
though. I'd say that maintaining security currency is quite important, as a
notable example. But that'll likely mean not waiting too long to exploit
new system features, since many of them are security-related.


Timothy Sipples
IT Architect Executive, Industry Solutions, IBM Z & LinuxONE,
Multi-Geography
E-Mail: sipp...@sg.ibm.com



Re: z/Linux 32-bit modules

2018-05-23 Thread Joe Monk
Paul,

Did you read the Linux ABI document that I linked to you?

Joe

On Wed, May 23, 2018 at 5:19 PM, Paul Edwards  wrote:

> Hi Philipp.
>
> Which CPU instruction do you think a -m31 compile
> produces that won't work in AM64 mode when
> malloc() starts returning addresses between 2 GiB
> and 4 GiB? I can't think of any. As far as I know a
> -m24, -m31 or -m32 would produce identical code
> if those options were available. People would have
> to go to some lengths to make -m24 and -m31
> produce non-32-bit-clean code. I don't think anyone
> has gone to that length.
>
> BFN. Paul.
>
>
>
>



Re: z/Linux 32-bit modules

2018-05-23 Thread Paul Edwards
Hi Philipp.

Which CPU instruction do you think a -m31 compile
produces that won't work in AM64 mode when
malloc() starts returning addresses between 2 GiB
and 4 GiB? I can't think of any. As far as I know a
-m24, -m31 or -m32 would produce identical code
if those options were available. People would have
to go to some lengths to make -m24 and -m31
produce non-32-bit-clean code. I don't think anyone
has gone to that length.

BFN. Paul.






Bug with XFS and SLES 12SP3 kernel-default-4.4.131-94.29-default

2018-05-23 Thread R P Herrold
On Wed, 23 May 2018, Ted Rodriguez-Bell wrote:

> Suse just released a new kernel-default-4.4.131-94.29-1
> package for SLES 12SP3 with some kernel security fixes.  If
> you use XFS, don't install it!

My condolences

There has been an (at least) four-way finger-pointing contest
raging for the last couple of months (on the Fedora Project
front) between:
- xfs developers
- systemd developers
- dracut developers
- innocent victims of XFS

Here is a trailhead to read back and forth from:
http://tinyurl.com/y7d2a4he

I have a number of Red Hat Bugzilla reports I track, and I have
corresponded twice privately with the Fedora Project lead about
applying dynamite to at least get the innocent victims out of
XFS' harm's way, and ... bupkis.  I have to say the problem
looks well and truly wedged, and XFS and data I care about
will never meet.

In response, I went into our local deployment SOPs and added
a general prohibition on using XFS, absent explicit prior
approval from an admin or greater-level authority.  The ONLY
exception I can see being sought relates to extremely large
filesystems.

-- Russ herrold



Bug with XFS and SLES 12SP3 kernel-default-4.4.131-94.29-default

2018-05-23 Thread Ted Rodriguez-Bell
SUSE just released a new kernel-default-4.4.131-94.29-1 package for SLES 12SP3
with some kernel security fixes.  If you use XFS, don't install it!

I installed it and the system wouldn't mount an existing XFS filesystem.  Our
friendly SUSE engineer (thanks, Roberto!) told me that it's a regression in the
kernel.  A *NEW* XFS filesystem created under this kernel mounted under both
4.4.131 and the older 4.4.126.
The biggest fix with this kernel is that now we have a file for speculative 
store bypass.  If you cat it 
(/sys/devices/system/cpu/vulnerabilities/spec_store_bypass) you get "Not 
Affected".  So I don't think waiting for the fix will significantly harm 
anyone's system security...
If anyone cares, our ticket is SR101165092091
Ted Rodriguez-Bell
Wells Fargo Mainframe and Midrange Services
te...@wellsfargo.com







Re: 7.5 package levels

2018-05-23 Thread R P Herrold
On Wed, 23 May 2018, Timothy Sipples wrote:

> There's a new dual build/delivery approach that Red Hat has
> introduced with RHEL 7.5. RHEL 7.5 offers an alternate build
> stream called "Structure A,"

One reason for Neale's questions is that the ClefOS
7.5 build has been bitten by the two tracks as we have
tried to spin and test updated ISOs.  The post-drop release
notes add color that was not in the beta release notes.

It may turn out that we (ClefOS) need to fork and offer two
variants

-- Russ herrold



Re: z/Linux 32-bit modules

2018-05-23 Thread Alan Altmark
On Wednesday, 05/23/2018 at 05:08 GMT, Philipp Kern  
wrote:
> On 2018-05-23 08:57, Paul Edwards wrote:
> > I would think that most ELF32 programs are already
> > able to use the full 4 GiB address space without
> > needing a recompile. malloc() can start returning
> > addresses in the 2 GiB - 4 GiB range.
>
> Traditionally this is untrue on s390 because -m31 produces 31-bit code
> that cannot access more than 2 GB of virtual memory. Is -m32 even a thing
> on s390x?

No, for the same reasons it doesn't exist in MVS.  Paul is suggesting that 
-m32 be created.

He can make his arguments to the Linux community in the Linux kernel 
mailing list (LKML), but I think they will throw cold water on it.  As 
Dave Rivers rightly points out, the machine simply will not deliver 32-bit 
semantics, and trying to get various system interfaces to fake it wastes 
your time and annoys the pig.

While Paul doesn't need to express the value of AM32 (it's obvious), he 
hasn't been able to express the value of hamstringing an AM64 application 
to pretend to be AM32.  IMO, if an application needs more than 2GB memory, 
then simply doubling the memory is a delaying tactic of dubious value. 
It's obviously growing and will soon need more than 4GB.

Alan Altmark

Senior Managing z/VM and Linux Consultant
IBM Systems Lab Services
IBM Z Delivery Practice
ibm.com/systems/services/labservices
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott




Re: z/Linux 32-bit modules

2018-05-23 Thread Philipp Kern

On 2018-05-23 08:57, Paul Edwards wrote:

I would think that most ELF32 programs are already
able to use the full 4 GiB address space without
needing a recompile. malloc() can start returning
addresses in the 2 GiB - 4 GiB range.


Traditionally this is untrue on s390 because -m31 produces 31-bit code
that cannot access more than 2 GB of virtual memory. Is -m32 even a thing
on s390x?

Kind regards
Philipp Kern



Re: z/Linux 32-bit modules

2018-05-23 Thread Dave Rivers
I believe such an approach might break C semantics regarding 
pointer addition?

In a 32-bit address space (where presumably only 32 bits of the
register are used to address a value) the addition of a pointer past-the-end
(or prior to the start) of an addressable object is undefined.  C compilers
(including the Dignus compiler) make use of that to perform some pretty
clever optimizations.  Any addressable object in this 32-bit address
space must be within that 4G of space, so to get these optimizations
correct, the compiler/optimizer can safely assume wrap-around at 32 bits.

So, if you are constraining pointers to being 32-bits, you really can’t
address memory outside of that range [0-4G].

But, if you are saying “hey - I’d like to _actually_ have a special 64-bit AMODE
where pointers are really 64 bits, but I’m guaranteed to be able to allocate
(malloc) memory in the first 4G, _and_ I’m guaranteed that my stack comes
from the first 4G, _and_ I’m guaranteed that my program is loaded in
the first 4G” - that’s a different beast, and I don’t think it
fits in the Linux memory model (where stack addresses, for example,
start pretty high in the virtual address space).

You might be able to hack-away a Linux kernel to create a hybrid OS
that does this… but I don’t think the Linux developers would accept it
into the canonical sources.

- Dave Rivers -


> On May 23, 2018, at 2:57 AM, Paul Edwards  wrote:
> 
>> 
>> Hi Timothy.
>> 
> 
> Great questions.
> 
> I don't want to use -m64 because that uses the
> 64-bit registers for everything, but I wish to produce
> compact modules using only 32-bit registers and
> pointers.
> 
> I would think that most ELF32 programs are already
> able to use the full 4 GiB address space without
> needing a recompile. malloc() can start returning
> addresses in the 2 GiB - 4 GiB range.
> 
> The only fly in the ointment that I know of is if an
> application program does a negative index
> expecting that to wrap at 32 bits. It would be good
> if compilers can be updated to avoid doing that so
> that programs start becoming naturally capable of
> running as AM64. I think gcc on z/Linux doesn't
> have this problem but I'm not certain.
> 
> Regarding MVS 3.8j, the situation there is different,
> but ideally applications themselves are AMODE
> neutral and the same binary just accepts whatever
> AMODE it was invoked in instead of demanding a
> particular AMODE. That way the module can run
> optimally on any environment.
> 
> 32-bit modules running as AM64 on z/Linux would
> basically be treating the environment as AM-infinity,
> which I think is ideal and this should be the model
> for all architectures. Rather than having a different
> mode like x64 has. I think z/Arch is fundamentally
> superior.
> 
> BFN. Paul.
> 
> 
> 
> 
> 



Re: z/Linux 32-bit modules

2018-05-23 Thread Paul Edwards
>
> Hi Timothy.
>

Great questions.

I don't want to use -m64 because that uses the
64-bit registers for everything, but I wish to produce
compact modules using only 32-bit registers and
pointers.

I would think that most ELF32 programs are already
able to use the full 4 GiB address space without
needing a recompile. malloc() can start returning
addresses in the 2 GiB - 4 GiB range.

The only fly in the ointment that I know of is if an
application program does a negative index
expecting that to wrap at 32 bits. It would be good
if compilers can be updated to avoid doing that so
that programs start becoming naturally capable of
running as AM64. I think gcc on z/Linux doesn't
have this problem but I'm not certain.

Regarding MVS 3.8j, the situation there is different,
but ideally applications themselves are AMODE
neutral and the same binary just accepts whatever
AMODE it was invoked in instead of demanding a
particular AMODE. That way the module can run
optimally on any environment.

32-bit modules running as AM64 on z/Linux would
basically be treating the environment as AM-infinity,
which I think is ideal; this should be the model
for all architectures, rather than having a separate
mode the way x64 does. I think z/Arch is fundamentally
superior.

BFN. Paul.






Re: 7.5 package levels

2018-05-23 Thread Timothy Sipples
There's a new dual build/delivery approach that Red Hat has introduced with
RHEL 7.5. RHEL 7.5 offers an alternate build stream called "Structure A,"
which is a Red Hat supported installation with kernel_alt packages. With
Structure A you get more hardware exploitation, especially on IBM z14 and
LinuxONE Emperor II/Rockhopper II machines, and that might or might not
affect the package version answers. The RHEL 7.5 release notes explain this
all pretty well:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.5_release_notes/index

RHEL 7.5 either includes kernel 3.10 or, in the Structure A build, kernel
4.14. Red Hat then backports critical fixes to both kernels as it services
RHEL 7.5.

Did you install the Structure A build, Daniel?


Timothy Sipples
IT Architect Executive, Industry Solutions, IBM Z & LinuxONE,
Multi-Geography
E-Mail: sipp...@sg.ibm.com
