Hi,
That depends on how your MPI is implemented, but what you really want is a
filesystem visible on each node. Since mpirun gmx_mpi mdrun is working,
it's fine.
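For example, something like this works on many clusters (a sketch;
/shared/gromacs is a made-up NFS-mounted prefix, so adjust paths for your
site):

  # Build once, installing into a prefix that every node mounts
  cmake .. -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/shared/gromacs
  make -j 8 && make install

  # Job scripts then only need to set up the environment and run
  source /shared/gromacs/bin/GMXRC
  mpirun gmx_mpi mdrun -deffnm md_test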
Mark
Hi,
I'm wondering: if I use GROMACS in a cluster environment, do I have to
install GROMACS on every node (at /usr/local/gromacs on each node), or is
it enough to install it on one node only (for example, the head node)?
Regards,
Husen
On Thu, Jun 23, 2016 at 3:41 PM, Mark Abraham
Hi,
The only explanation is that that file is not in fact properly accessible
when rank 0 is placed anywhere other than on "compute-node," which means
your organization of filesystem / slurm / etc. isn't good enough for what
you're doing.
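To see where things land, you can print the allocation from inside the job
(a sketch, assuming slurm; with default placement, rank 0 starts on the
first node listed):

  # Which nodes did slurm give this job?
  scontrol show hostnames "$SLURM_JOB_NODELIST"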
Mark
On Thu, Jun 23, 2016 at 10:15 AM Husen R
Hi,
I am still unable to find the cause of the fatal error.
Previously, GROMACS was installed on every node; that is why the Build
time mismatch and Build user mismatch notes appeared.
Now the Build time mismatch and Build user mismatch issues are solved by
installing GROMACS in a shared directory.
I
Hi,
On Thu, Jun 16, 2016 at 12:24 PM Husen R wrote:
> On Thu, Jun 16, 2016 at 4:01 PM, Mark Abraham
> wrote:
>
> > Hi,
> >
> > There's just nothing special about any node at run time.
> >
> > Your script looks like it is building GROMACS fresh each
On Thu, Jun 16, 2016 at 4:01 PM, Mark Abraham
wrote:
> Hi,
>
> There's just nothing special about any node at run time.
>
> Your script looks like it is building GROMACS fresh each time - there's no
> need to do that,
Which part of my script?
I always use this
Hi,
There's just nothing special about any node at run time.
Your script looks like it is building GROMACS fresh each time - there's no
need to do that, but the fact that the node name is showing up in the check
that takes place when the checkpoint is read is not relevant to the problem.
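For illustration, a restart script needs no build steps at all - a minimal
sketch, assuming an existing install at the hypothetical prefix
/shared/gromacs, placeholder node/task counts, and md_test.cpt as a guess
at your checkpoint name:

  #!/bin/bash
  #SBATCH --nodes=2
  #SBATCH --ntasks=16

  # Use the existing installation; no cmake/make here
  source /shared/gromacs/bin/GMXRC

  # Append to the previous run from its checkpoint
  mpirun gmx_mpi mdrun -deffnm md_test -cpi md_test.cpt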
Mark
On Thu, Jun 16, 2016 at 2:32 PM, Mark Abraham
wrote:
> Hi,
>
> On Thu, Jun 16, 2016 at 9:30 AM Husen R wrote:
>
> > Hi,
> >
> > Thank you for your reply!
> >
> > md_test.xtc exists and is writable.
> >
>
> OK, but it needs to be seen that way from the
Hi,
On Thu, Jun 16, 2016 at 9:30 AM Husen R wrote:
> Hi,
>
> Thank you for your reply!
>
> md_test.xtc exists and is writable.
>
OK, but it needs to be seen that way from the set of compute nodes you are
using, and organizing that is up to you and your job scheduler, etc.
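One quick way to check that from inside an allocation (a sketch; one task
per node reports what it sees):

  # Each allocated node prints its hostname and whether the file is visible
  srun --ntasks-per-node=1 sh -c 'hostname; ls -l md_test.xtc'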
>
Hi,
Thank you for your reply!
md_test.xtc exists and is writable.
I tried to restart from the checkpoint file by excluding nodes other than
compute-node, and it works.
Only '--exclude=compute-node' produces this error.
Is this the same issue as the one in this thread?
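For reference, the two submissions differ only in which node is excluded
(a sketch; restart.sh and other-node are placeholders):

  # Fails: compute-node, where the files live, is excluded
  sbatch --exclude=compute-node restart.sh

  # Works: excluding a different node keeps rank 0 on compute-node
  sbatch --exclude=other-node restart.sh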
Hi,
The stuff about different nodes or numbers of nodes doesn't matter - it's
merely an advisory note from mdrun. mdrun failed when it tried to operate
upon md_test.xtc, so perhaps you need to consider whether the file exists,
is writable, etc.
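A quick sanity check along those lines (a sketch, run in the working
directory before resubmitting):

  # Does the trajectory exist, and can this user write to it?
  [ -e md_test.xtc ] && echo exists || echo missing
  [ -w md_test.xtc ] && echo writable || echo "not writable"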
Mark
On Thu, Jun 16, 2016 at 6:48 AM Husen R
This is the rest of the error message:
Regards,
Husen
Halting parallel program gmx mdrun on rank 0 out of 16
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
Fatal error in PMPI_Bcast: Unknown error class, error stack:
PMPI_Bcast(1635)..:
Hi all,
I got the following error message when I tried to restart a GROMACS
simulation from a checkpoint file.
I restarted the simulation using fewer nodes and processes, and I also
excluded one node using the '--exclude=' option (in slurm) for
experimental purposes.
I'm sure fewer nodes and processes are