Hi
Currently I am approaching a similar problem/workflow with Spack and an AWS
S3 shared storage. Mounting the storage from a laptop gives you the same
layout as on each node of my AWS EC2 cluster.
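The mount-everywhere idea can be sketched roughly as below, using s3fs-fuse for the mount and Spack for the installs. The bucket name, mount point, and package version are hypothetical placeholders, and the commands assume credentials are already configured; this is a sketch of the workflow, not a tested recipe.

```shell
# Mount the shared S3 bucket at the same path on the laptop and on
# every EC2 node, so the Spack install tree has identical prefixes.
s3fs my-spack-bucket /shared/spack -o iam_role=auto

# Install Open MPI into the shared tree once.
spack install openmpi@4.0.4

# On any node (or the laptop), pick it up from the same path.
spack load openmpi@4.0.4
```

Because every machine sees the install under the same absolute path, the hard-coded paths inside the Open MPI installation remain valid everywhere.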

As others mentioned before: you still have to recompile your work to take
advantage of the Xeon-class CPUs.

This should not be a problem, as SLURM can distribute your application;
moreover, on a proper cluster you already have a parallel filesystem
running, so all nodes can run the MPI application.
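A minimal SLURM batch script for that setup might look like the following. The job name, node counts, and binary name are hypothetical; it assumes Open MPI was built with SLURM support, so mpirun picks up the allocation automatically.

```shell
#!/bin/bash
#SBATCH --job-name=mpi_test       # placeholder job name
#SBATCH --nodes=4                 # number of nodes to allocate
#SBATCH --ntasks-per-node=8       # MPI ranks per node
#SBATCH --time=00:30:00           # wall-clock limit

# Load Open MPI from the shared Spack tree so every node
# resolves the same mpirun and libraries.
spack load openmpi@4.0.4

# With SLURM-aware Open MPI, mpirun inherits the allocation
# (no -np or hostfile needed).
mpirun ./my_mpi_app               # ./my_mpi_app is a placeholder
```

Submitted with `sbatch job.sh`, this runs the application across all allocated nodes from the shared filesystem.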

Steve

On Fri., Jul. 24, 2020, 13:00 Lana Deere via users, <
users@lists.open-mpi.org> wrote:

> I have open-mpi 4.0.4 installed on my desktop and my small test programs
> are working.
>
> I would like to migrate the open-mpi to a cluster and run a larger program
> there.  When moved, the open-mpi installation is in a different pathname
> than it was on my desktop and it doesn't seem to work any longer.  I can
> make the libraries visible via LD_LIBRARY_PATH but this seems
> insufficient.  Is there an environment variable which can be used to tell
> the open-mpi where it is installed?
>
> Is it mandatory to actually compile the release in the ultimate
> destination on each system where it will be used?
>
> Thanks.
>
> .. Lana (lana.de...@gmail.com)
>
>
>
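For what it's worth, Open MPI does support relocating an installation via the OPAL_PREFIX environment variable, which tells the runtime where its tree now lives; LD_LIBRARY_PATH alone is not enough because helper paths are compiled in. The prefix path below is a hypothetical example:

```shell
# Point Open MPI at the new install location (path is a placeholder).
export OPAL_PREFIX=/shared/opt/openmpi-4.0.4

# Make the relocated binaries and libraries visible as well.
export PATH="$OPAL_PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$OPAL_PREFIX/lib:$LD_LIBRARY_PATH"
```

With those set, mpirun and mpicc from the moved tree should find their components without recompiling, though rebuilding on the target CPUs is still worthwhile for performance.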
