Hello all!

I'm seeing more and more news about exotic new silicon pushing code closer 
to the data, and I was wondering what the future of JVMs looks like in all this.

As an example: UPMEM <https://www.upmem.com/technology/> is starting to 
offer RAM with compute capabilities. They claim 2TB/s of 
RAM-compute-RAM bandwidth for a 128GB set. The low-level API seems to be an 
LLVM backend plus code 
<https://github.com/upmem/dpu_demo/tree/sdk-2019.3/checksum> close to a map 
operation: for each chunk of computing RAM, send a tasklet to do a local 
computation.
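For intuition, the pattern in that checksum demo is roughly the following, sketched here in plain Java on ordinary RAM. The chunk size and the checksum function are made up for illustration; none of this is the actual UPMEM API, just the shape of "one local computation per chunk, combined on the host":

```java
import java.util.stream.IntStream;

public class ChunkedChecksum {
    // Hypothetical chunk size standing in for one DPU's local RAM.
    static final int CHUNK_SIZE = 4;

    // Sum each chunk locally (a stand-in for a per-chunk "tasklet"),
    // then combine the partial results on the host side.
    static long checksum(int[] data) {
        int chunks = (data.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
        return IntStream.range(0, chunks)
                .parallel()
                .mapToLong(c -> {
                    long local = 0;
                    int end = Math.min((c + 1) * CHUNK_SIZE, data.length);
                    for (int i = c * CHUNK_SIZE; i < end; i++) {
                        local += data[i]; // compute stays "near" its chunk
                    }
                    return local;
                })
                .sum();
    }

    public static void main(String[] args) {
        int[] data = IntStream.rangeClosed(1, 10).toArray();
        System.out.println(checksum(data)); // 55
    }
}
```

On real PIM hardware the inner loop would run on the device next to the memory bank, and only the partial results would cross the bus.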

I have lots of questions:

* Are Java and the JVMs suited to make good use of that hardware?
* What's the mechanically sympathetic API to it? Are map operations in 
Parallel Streams a good abstraction?
* Surely the results must be stored locally, within each RAM chunk. What 
would automated memory management look like with this? Is it per RAM chunk, 
can it be global? Is there a need for rebalancing/shuffling data between 
RAM chunks?
* Are we going to see it at all on JVMs? What's the integration cost? Do 
you translate Java bytecode to LLVM to use their backend? Can it be done 
through the upcoming Vector API?
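To make the locality question concrete, here's a plain-JVM sketch (hypothetical shapes, no PIM hardware involved): each chunk's result lands in that chunk's own slot and no chunk writes into another chunk's region, which is roughly the constraint per-bank result storage would impose on a streams-style map:

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class PerChunkResults {
    // Map over chunks in parallel, with each result stored in the
    // slot "belonging" to that chunk -- no cross-chunk writes, so a
    // global heap isn't needed for the intermediate results.
    static long[] mapPerChunk(long[][] chunks) {
        long[] results = new long[chunks.length];
        IntStream.range(0, chunks.length)
                .parallel()
                .forEach(c -> results[c] = Arrays.stream(chunks[c]).sum());
        return results;
    }

    public static void main(String[] args) {
        long[][] chunks = { {1, 2}, {3, 4}, {5, 6} };
        System.out.println(Arrays.toString(mapPerChunk(chunks))); // [3, 7, 11]
    }
}
```

Anything beyond this shape (a result that outgrows its chunk, or a shuffle between chunks) is exactly where the automated-memory-management question above gets interesting.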

Cheers
Ben

-- 
You received this message because you are subscribed to the Google Groups 
"mechanical-sympathy" group.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/mechanical-sympathy/64a4a28e-8db6-41c9-920b-a17491975191%40googlegroups.com.
