I just wrote a wrapper around Open MPI in Go:  https://github.com/emer/empi

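For anyone curious what such a wrapper does under the hood, here's a minimal 
cgo sketch (this is not the empi API, just an illustration; the build flags 
assume an Open MPI install that links with -lmpi and has mpi.h on the default 
include path, otherwise check the output of `mpicc --showme`):

    package main

    /*
    #cgo LDFLAGS: -lmpi
    #include <mpi.h>

    // MPI_COMM_WORLD is a macro in most MPI implementations, so cgo can't
    // reference it directly; wrap it in a tiny helper function.
    static MPI_Comm worldComm() { return MPI_COMM_WORLD; }
    */
    import "C"
    import "fmt"

    func main() {
        // Passing NULL for argc/argv has been allowed since MPI-2.
        C.MPI_Init(nil, nil)
        defer C.MPI_Finalize()

        var rank, size C.int
        C.MPI_Comm_rank(C.worldComm(), &rank)
        C.MPI_Comm_size(C.worldComm(), &size)

        fmt.Printf("hello from rank %d of %d\n", int(rank), int(size))
    }

Build with `go build` and launch with something like `mpirun -np 4 ./hello` 
(the launcher name and link flags vary between MPI distributions).
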
Also, here's a set of random Go bindings I found:
        • https://github.com/yoo/go-mpi
        • https://github.com/marcusthierfelder/mpi
        • https://github.com/JohannWeging/go-mpi
There's even a from-scratch implementation:

        • https://github.com/btracey/mpi
Also a few libraries for using InfiniBand directly:

        • https://github.com/Mellanox/rdmamap
        • https://github.com/jsgilmore/ib
        • https://github.com/Mellanox/libvma
Some discussion: https://groups.google.com/forum/#!topic/golang-nuts/t7Vjpfu0sjQ

- Randy

> On Sep 14, 2020, at 3:25 AM, Samuel Lampa <samuel.la...@gmail.com> wrote:
> 
> On Thursday, June 11, 2015 at 2:53:36 PM UTC+2 Egon wrote:
> 1. In which cases a cluster of say 4 (or 10 or 100 for instance) Raspberry Pi 
> mini computers can be more cost-effective than a single computer with the 
> same amount of cores (does the cost of communicating the data between the 
> computers via the network not outweigh the fact that they can run tasks 
> simultaneously)?
> 
> The general answer is Amdahl's Law 
> (http://en.wikipedia.org/wiki/Amdahl%27s_law), of course it's not always 
> applicable 
> (http://www.futurechips.org/thoughts-for-researchers/parallel-programming-gene-amdahl-said.html).
>  When moving things to multiple computers you'll get a larger overhead in 
> communication compared to a single computer; at the same time you may reduce 
> resource contention for disk, RAM (or other resources). So depending on 
> where your bottlenecks are, it could go either way...
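> 
> To put rough numbers on that (just an illustration, assuming a 95% parallel 
> fraction): Amdahl's Law gives speedup(N) = 1 / ((1 - p) + p/N), so with 
> p = 0.95 you get roughly 3.5x on 4 nodes and roughly 16.8x on 100 nodes, 
> and never more than 20x no matter how many nodes you add. Communication 
> overhead only pushes those numbers further down.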
> 
> Yes, and also note that supercomputers often use special network 
> protocols/technologies which support so-called "remote direct memory access" 
> (RDMA) [1], such as InfiniBand [2], to get acceptable performance for 
> high-performance multi-core computations across compute nodes. InfiniBand 
> cards are pretty expensive as far as I know, so their cost will probably 
> outweigh the savings from buying a lot of RPis.
> 
> I'd still be interested to hear whether anybody knows about new developments 
> on MPI for Go (for HPC use cases, if nothing else). :)
> 
> [1] https://en.wikipedia.org/wiki/Remote_direct_memory_access
> [2] https://en.wikipedia.org/wiki/InfiniBand
> 
> Best
> Samuel
> 
