On 2017-06-14 16:38, Erik Helin wrote:
On 06/14/2017 02:21 PM, Severin Gehwolf wrote:
Hi Erik,
On Wed, 2017-06-14 at 13:50 +0200, Erik Helin wrote:
For the fourth patch, fix-zero-build-on-sparc.diff, I'm not so sure. For
example, the following is a bit surprising to me (mostly because I'm not
familiar with zero):
--- a/hotspot/src/share/vm/gc/shared/memset_with_concurrent_readers.hpp
+++ b/hotspot/src/share/vm/gc/shared/memset_with_concurrent_readers.hpp
@@ -37,7 +37,7 @@
// understanding that there may be concurrent readers of that memory.
void memset_with_concurrent_readers(void* to, int value, size_t size);
-#ifdef SPARC
+#if defined(SPARC) && !defined(ZERO)
When this code was written, the intent was clearly to have a specialized
version of this function for SPARC. When writing such code, do we always
have to take into account the zero case with !defined(ZERO)?
As of now, yes I think so. The thing is that Zero is supposed to be
architecture agnostic for the most part. That is, you can build Zero on
x86_64, SPARC, aarch64, etc.
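For context, here is a rough sketch of how the dispatch in memset_with_concurrent_readers.hpp presumably ends up looking with the patch applied. The generic fallback body is an assumption on my part (a plain memset), since the quoted diff only shows the changed #ifdef line:

#include <stddef.h>
#include <string.h>

// Fill a block of memory with value, like memset, but with the
// understanding that there may be concurrent readers of that memory.
void memset_with_concurrent_readers(void* to, int value, size_t size);

#if defined(SPARC) && !defined(ZERO)
// A hand-written SPARC implementation is provided elsewhere, so only
// the declaration above is used here.
#else
// Everyone else, including a Zero build running on SPARC hardware,
// falls back to plain memset.
inline void memset_with_concurrent_readers(void* to, int value, size_t size) {
  ::memset(to, value, size);
}
#endif

Without the !defined(ZERO) part, a Zero build on sparc64 would take the SPARC branch, get only the declaration, and presumably fail to link because Zero never provides the SPARC implementation.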
Ok. But if Zero is architecture agnostic, why do we have the directory
hotspot/src/cpu/zero? Sorry, I don't know much about Zero...
Zero is a strange beast. :-&
It behaves partially as a separate architecture, and partially as a "jvm
variant" (like server, client), and partially as a "turn this special
flag on".
Long term, there's probably a bunch of clarity to gain from cleaning
this up.
/Magnus
That doesn't seem like the right (or a scalable) approach to me.
Agreed. That's how it is at the moment, though.
Severin and/or Roman, do you guys know more about Zero and how this
should work? If I want to write a function that I want to specialize for
e.g. x86-64 or arm, do I always have to take Zero into account? Or
should some other define be used, like #ifdef TARGET_ARCH_sparc?
So the ZERO define can be set regardless of the architecture. I don't really know of any define that does what you want, except perhaps #if defined(<ARCH>) && !defined(ZERO).
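To illustrate (a sketch based on my understanding of the defines, not taken verbatim from the build system): on a Zero build the CPU define for the host architecture is still set alongside ZERO, so an arch check alone does not tell you whether the arch-specific code will actually be built.

// Define combinations, roughly:
//   server VM on sparc64 : SPARC defined, ZERO not defined
//   Zero   VM on sparc64 : SPARC defined, ZERO defined
//   Zero   VM on x86_64  : AMD64 defined, ZERO defined
//
// An arch-only guard therefore leaks the specialized path into Zero
// builds; the combined guard keeps Zero on the portable code everywhere.
#if defined(SPARC) && !defined(ZERO)
void arch_specific_thing();   // hypothetical name; defined in SPARC-only sources
#else
inline void arch_specific_thing() { /* portable fallback */ }
#endif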
Hmm, ok, but for the above code snippet, if we are running Zero on SPARC, can't we use the SPARC-optimized version of
memset_with_concurrent_readers? Or can't we use SPARC assembly in the
runtime when running with Zero?
Thanks,
Erik
Thanks,
Severin
Thanks,
Erik
On 06/09/2017 12:20 PM, John Paul Adrian Glaubitz wrote:
Hi!
I am currently working on fixing OpenJDK-9 on all non-mainstream
targets available in Debian. For Debian/sparc64, the attached four
patches were necessary to make the build succeed [1].
I know the patches cannot be merged right now, but I'm posting them
anyway in case someone else is interested in using them.
All patches are:
Signed-off-by: John Paul Adrian Glaubitz <glaub...@physik.fu-berlin.de>
I also signed the OCA.
I'm now looking into fixing the builds on alpha (DEC Alpha), armel
(ARMv4T), m68k (680x0), powerpc (PPC32) and sh4 (SuperH/J-Core).
Cheers,
Adrian
[1]
https://buildd.debian.org/status/fetch.php?pkg=openjdk-9&arch=sparc64&ver=9%7Eb170-2&stamp=1496931563&raw=0