Although I generally agree with the sentiment about Java's poor track record when it comes to early support for things like SIMD or GPGPU, GraalVM looks very promising. Firstly, there has been a lot of relatively recent development in the area of optimisations, e.g. https://github.com/oracle/graal/pull/1692. Secondly, GraalVM has matured enough to become an official offering from Oracle: https://www.oracle.com/uk/tools/graalvm-enterprise-edition.html. There's also a freely available Community Edition, though it lacks some of the extra optimisations. It looks like the future is not that bleak.
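To make the SIMD point in the quoted message concrete: the canonical case is a simple element-wise loop like the one below, which HotSpot's superword auto-vectorizer may or may not turn into SIMD instructions depending on the JIT, the loop shape, and the hardware. This is a minimal plain-Java sketch (class and method names are mine, not from any of the projects discussed):

```java
import java.util.Arrays;

public class SimdCandidate {
    // A straightforward element-wise addition over arrays: the kind of
    // counted loop that HotSpot's superword optimization can, in the best
    // case, compile down to SIMD instructions. Whether it actually does is
    // opaque to the programmer, which is exactly the complaint above.
    static void add(float[] a, float[] b, float[] c) {
        for (int i = 0; i < c.length; i++) {
            c[i] = a[i] + b[i];
        }
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 3f, 4f};
        float[] b = {10f, 20f, 30f, 40f};
        float[] c = new float[4];
        add(a, b, c);
        System.out.println(Arrays.toString(c)); // [11.0, 22.0, 33.0, 44.0]
    }
}
```

To check what the JIT actually emitted, you can run with -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly (requires the hsdis disassembler plugin) and look for packed instructions such as vaddps.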
On Tue, Nov 5, 2019 at 2:19 PM Peter Veentjer <[email protected]> wrote:

> The track record of Java isn't very good, IMHO. For example:
> 1) The HotSpot JVM isn't very good at identifying where SIMD instructions
> should be used (Azul Zing with the LLVM backend does a better job).
> 2) There is no official integration for GPGPU (General-Purpose computing
> on Graphics Processing Units), even though powerful GPUs are anything but
> rare.
> 3) No support for hardware transactional memory.
> 4) Hardware-accelerated encryption, although Java finally caught up in
> Java 11.
> 5) No control over the memory layout of objects.
> 6) No record types (to reduce the overhead of objects).
>
> So based on that, I would not be surprised if official integration were
> added very late, if added at all.
>
> On Monday, October 21, 2019 at 1:25:05 PM UTC+3, Benoît Paris wrote:
>>
>> Hello all!
>>
>> I'm seeing more and more news about new exotic silicon pushing code
>> closer to the data, and I was wondering what the future of JVMs is in
>> all this.
>>
>> As an example: Upmem <https://www.upmem.com/technology/> is starting to
>> offer RAM with compute capabilities. They claim 2 TB/s of
>> RAM-compute-RAM bandwidth for a 128 GB set. The low-level API seems to
>> be an LLVM backend, and the code
>> <https://github.com/upmem/dpu_demo/tree/sdk-2019.3/checksum> looks close
>> to a map operation: for each chunk of compute-capable RAM, send a tasklet
>> for a local computation.
>>
>> I have lots of questions:
>>
>> * Are Java and the JVMs suited to making good use of that hardware?
>> * What's the mechanically sympathetic API for it? Are map operations in
>> parallel streams a good abstraction?
>> * Surely the results must be stored locally, within each RAM chunk. What
>> would automated memory management look like with this? Is it per RAM
>> chunk, or can it be global? Is there a need for rebalancing/shuffling
>> data between RAM chunks?
>> * Are we going to see it at all on JVMs? What's the integration cost?
>> Do you translate Java bytecode to LLVM to use their backend? Can it be
>> done through the upcoming Vector API?
>>
>> Cheers
>> Ben
>>
> --
> You received this message because you are subscribed to the Google Groups
> "mechanical-sympathy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web, visit
> https://groups.google.com/d/msgid/mechanical-sympathy/db3bcaca-e68a-4085-9123-853c001eb0f6%40googlegroups.com.
>
-- 
You received this message because you are subscribed to the Google Groups "mechanical-sympathy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web, visit https://groups.google.com/d/msgid/mechanical-sympathy/CAHNMKAoAmXdmZ24VPmqHOb06os%2Bafn%2BzCgfyC6n_funT%2BwAeSg%40mail.gmail.com.
