On Wed, Aug 24, 2016 at 1:03 AM, Szilárd Páll wrote:
On Mon, Aug 22, 2016 at 5:36 PM, Albert wrote:
> Hello Mark:
>
> I've recompiled Gromacs without MPI. I re-submitted the jobs with the
> command lines you suggested.
>
> gmx mdrun -ntomp 10 -v -g test.log -pin on -pinoffset 0 -gpu_id 0 -s
> test.tpr >& test.info
> gmx mdrun -ntomp 10 -v -g test.log -pin on -pinoffset 10 -gpu_id 1 -s
> test.tpr >& test.info
I used the command-line tool "top" to check how many CPUs are in use.
Each gmx process occupied 7.5 CPUs.
On 08/23/2016 06:38 PM, Mark Abraham wrote:
Hi,
How did you decide that only 15 cores were being used? What performance did
you observe with only one of the jobs running, vs the performance of both
of them while both are running? Please share log files via links to files
on a file sharing service - it's quite tedious and inefficient if we [...]
Hello Mark:
I've recompiled Gromacs without MPI. I re-submitted the jobs with the
command lines you suggested.
gmx mdrun -ntomp 10 -v -g test.log -pin on -pinoffset 0 -gpu_id 0 -s
test.tpr >& test.info
gmx mdrun -ntomp 10 -v -g test.log -pin on -pinoffset 10 -gpu_id 1 -s
test.tpr >& test.info
Hi,
It's a bit curious to want to run two 8-thread jobs on a machine with 10
physical cores: you'll get a lot of performance imbalance, since some
threads must share the same physical core. But I guess it's a free
world. As I suggested the other day,
http://manual.gromacs.org/documentation/
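Mark's arithmetic above can be sketched as follows. This is only an illustration of the oversubscription he describes; the job, thread, and core counts are the ones mentioned in the thread, not measurements from any actual machine:

```python
# Two pinned 8-thread mdrun jobs on a 10-physical-core machine:
# count how many threads cannot get a physical core to themselves.
jobs = 2
threads_per_job = 8
physical_cores = 10

total_threads = jobs * threads_per_job                   # 16
oversubscribed = max(0, total_threads - physical_cores)  # 6

print(f"{total_threads} threads on {physical_cores} cores -> "
      f"{oversubscribed} threads must share a physical core")
```

With 6 of 16 threads sharing physical cores, the per-step time of each run is set by its slowest thread, which is the imbalance Mark warns about.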
Does anybody have more suggestions?
Thanks a lot
On 08/17/2016 09:07 AM, Albert wrote:
Hello:
Here is the information that you asked for.
gmx_mpi mdrun -s 7.tpr -v -g 7.log -c 7.gro -x 7.xtc -ntomp 8 -gpu_id
0 -pin on
Most of that copy-pasted info is not what I asked for and overall not
very useful. You have still not shown any log files (or details on the
hardware). Share the *relevant* stuff, please!
--
Szilárd
On Tue, Aug 16, 2016 at 5:07 PM, Albert wrote:
Hello:
Here is my MDP file:
define = -DREST_ON -DSTEP6_4
integrator = md
dt = 0.002
nsteps = 100
nstlog = 1000
nstxout = 0
nstvout = 0
nstfout = 0
nstcalcenergy [...]
Hi,
Without logs and hw configs, it's hard to tell what's happening.
By turning off pinning, the OS is free to move threads around and it
will try to ensure cores are utilized. However, by leaving threads
un-pinned you risk taking a significant performance hit. So I'd
recommend that you run with [...]
Hello:
I added an additional option to one of the jobs:
-pinoffset 8
to the command line. But it is still the same. However, if I
remove the option "-pin on" from one of the jobs, 16 CPUs were occupied
On 08/16/2016 04:18 PM, Szilárd Páll wrote:
By starting two (pinned) runs without an offset, both simulations get
placed on the first 8 CPUs. You should offset one of them, e.g. the
second, by using -pinoffset in order to have the two runs use distinct
sets of cores.
Cheers,
--
Szilárd
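Szilárd's point about distinct core sets can be sketched with a toy model of what -pin on / -pinoffset do. This is an illustrative simplification, not mdrun's actual pinning logic; a pin stride of 1 and 16 logical cores are assumed:

```python
def pinned_cores(ntomp, pinoffset, stride=1):
    """Toy model: the set of logical cores a run started with
    -pin on -pinoffset <pinoffset> -ntomp <ntomp> would occupy."""
    return {pinoffset + i * stride for i in range(ntomp)}

run1 = pinned_cores(ntomp=8, pinoffset=0)
run2_no_offset = pinned_cores(ntomp=8, pinoffset=0)  # both runs on cores 0-7
run2_offset = pinned_cores(ntomp=8, pinoffset=8)     # second run on cores 8-15

print(sorted(run1 & run2_no_offset))  # all 8 cores are contested
print(sorted(run1 & run2_offset))     # [] - the runs use distinct cores
```

Without an offset the two runs fight over the same 8 cores while the rest of the machine idles, which matches the "each gmx occupied 7.5 CPU" observation earlier in the thread.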
On Tue, Aug 16, 2016 at 3:55 PM, Albert wrote:
Hello:
I submitted two Gromacs GPU MD simulations to the same GPU machine with
identical command lines:
gmx_mpi mdrun -s 7.tpr -v -g 7.log -c 7.gro -x 7.xtc -ntomp 8 -gpu_id 0
-pin on >& 7.info
gmx_mpi mdrun -s 7.tpr -v -g 7.log -c 7.gro -x 7.xtc -ntomp 8 -gpu_id 1
-pin on >& 7.info
[...]