[go-nuts] Re: Is there a Go native approach to MPI (Message passing interface)?

2020-09-14 Thread Vitaly
https://nanomsg.org/ aka https://zeromq.org/

Wednesday, June 10, 2015 at 01:40:39 UTC+5, Serge Hulne: 

> Hi, I would like to know if there is a built-in mechanism (or a typical 
> Go paradigm) to address message passing interfaces.
>
> Go solves the problem of message passing between goroutines using the 
> goroutines / channels construct, and it also solves the problem of spreading 
> the load over multiple processors when a single, isolated computer is involved.
>
> Does Go also provide something similar (similar to goroutines / channels 
> or to MPI) to distribute computing load across computers on a network ?
>
> (I am aware that MPI is more targeted at parallel processing and goroutines 
> / channels are more targeted at making concurrency easy to formulate, but 
> still.)
>
> Serge.
>
>
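For readers new to the thread, the in-process mechanism Serge refers to looks roughly like this: a minimal fan-out/fan-in sketch (the `square` task and the counts are my own illustration, not from the thread), where the Go runtime spreads the goroutines across the available cores.

```go
package main

import (
	"fmt"
	"sync"
)

// square is a stand-in for any CPU-bound task.
func square(n int) int { return n * n }

func main() {
	jobs := make(chan int)
	results := make(chan int)
	var wg sync.WaitGroup

	// Start a few workers; the runtime schedules them
	// across the machine's cores.
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range jobs {
				results <- square(n)
			}
		}()
	}

	// Feed the work, then close the channel to signal completion.
	go func() {
		for i := 1; i <= 10; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	// Close results once every worker has drained its jobs.
	go func() {
		wg.Wait()
		close(results)
	}()

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println(sum) // sum of squares 1..10 = 385
}
```

Note that all of this stays inside one OS process; the question in this thread is what the equivalent looks like across machines.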

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/67641730-dfd0-48af-9a44-f092cc6bcc8bn%40googlegroups.com.


[go-nuts] Re: Is there a Go native approach to MPI (Message passing interface)?

2020-09-14 Thread Samuel Lampa
On Thursday, June 11, 2015 at 2:53:36 PM UTC+2 Egon wrote:

>> 1. In which cases can a cluster of, say, 4 (or 10 or 100) Raspberry 
>> Pi mini computers be more cost-effective than a single computer with 
>> the same number of cores (does the cost of communicating the data between 
>> the computers via the network not outweigh the fact that they can run tasks 
>> simultaneously)?
>>
>
> The general answer is Amdahl's Law (
> http://en.wikipedia.org/wiki/Amdahl%27s_law), though of course it's not 
> always applicable (
> http://www.futurechips.org/thoughts-for-researchers/parallel-programming-gene-amdahl-said.html).
>  
> When moving things to multiple computers you'll get a larger communication 
> overhead compared to a single computer; at the same time you may reduce 
> resource contention for disk, RAM, and other resources. So depending on 
> where your bottlenecks are, it could go either way...
>
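As a rough worked illustration of the Amdahl's Law point quoted above (the parallel fraction of 0.9 and the core counts are my own example numbers):

```go
package main

import "fmt"

// amdahlSpeedup returns the theoretical speedup for a workload
// whose parallelizable fraction is p, run on n processors:
//
//	S(n) = 1 / ((1 - p) + p/n)
func amdahlSpeedup(p float64, n int) float64 {
	return 1 / ((1 - p) + p/float64(n))
}

func main() {
	// Even with 90% of the work parallelizable, 100 cores
	// (e.g. a pile of Raspberry Pis) can never exceed a 10x
	// speedup, and that is before counting any network overhead.
	fmt.Printf("4 cores:   %.2fx\n", amdahlSpeedup(0.9, 4))   // ~3.08x
	fmt.Printf("100 cores: %.2fx\n", amdahlSpeedup(0.9, 100)) // ~9.17x
}
```

The serial fraction (1 - p) dominates as n grows, which is why adding cheap nodes has diminishing returns.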

Yes, and also note that supercomputers often use special network 
protocols/technologies that support so-called "Remote Direct Memory 
Access" (RDMA) [1], such as InfiniBand [2], to get acceptable performance 
for high-performance multi-core computations across compute nodes. 
InfiniBand cards are pretty expensive as far as I know, so their cost would 
probably outweigh the benefit of buying a lot of RPis.

I'd still be interested to hear whether anybody knows of new developments on 
MPI for Go (for HPC use cases, if nothing else). :)

[1] https://en.wikipedia.org/wiki/Remote_direct_memory_access
[2] https://en.wikipedia.org/wiki/InfiniBand
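In the absence of an MPI binding, the closest thing in the standard library to cross-machine message passing is net/rpc. A minimal sketch (the `Worker` service, its `Multiply` method, and the loopback address are my own illustration; in a real cluster the server and client would run on different machines, and MPI-style collectives like scatter/gather would have to be built on top):

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Worker exposes a task over the network via Go's stdlib RPC.
type Worker struct{}

// Args is the wire format for a request (encoded with gob by default).
type Args struct{ A, B int }

// Multiply is an illustrative remote task.
func (w *Worker) Multiply(args Args, reply *int) error {
	*reply = args.A * args.B
	return nil
}

func main() {
	srv := rpc.NewServer()
	if err := srv.Register(new(Worker)); err != nil {
		log.Fatal(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // any free port
	if err != nil {
		log.Fatal(err)
	}
	go srv.Accept(ln)

	// On another machine this would dial the server's real address.
	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	var product int
	if err := client.Call("Worker.Multiply", Args{A: 6, B: 7}, &product); err != nil {
		log.Fatal(err)
	}
	fmt.Println(product) // 42
}
```

This gives point-to-point calls only; it says nothing about RDMA-class latency, which is where the InfiniBand hardware mentioned above comes in.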

Best
Samuel

To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/d1d39602-e48e-4c2e-909b-a85d0d7e81ban%40googlegroups.com.