On 19.06.2014 at 03:08, Sangmin Park wrote:
Hi,
Do you mean that I have to compile SGE? Doesn't it remove all log data that
was generated before?
The accounting file? No. But the memory of the share-tree usage will be gone.
If I have to do, I would.
Or make the changes to PAM, then
And the reason why the load is 12 even though there are no slots is that we
have several queues.
all.q does not allow it, but the other queues do. This user used another
queue.
Hi,
On 17.06.2014 at 03:51, Sangmin Park wrote:
It looks okay. But the usage reporting still does not work.
This is the 'ps -e f' result.
11151 ? Sl 0:14 /opt/sge/bin/lx24-amd64/sge_execd
16851 ? S  0:00  \_ sge_shepherd-46865 -bg
16877 ? Ss 0:00      |  \_ bash
On 16.06.2014 at 03:43, Sangmin Park wrote:
Hi,
'ps -e f' displays a nice process tree. Thanks, mate!
And the sge_machinefile_21431 file contains information about the computing
node and the number of cores on which a job is running.
sge_machinefile_21431 just contains one line: lion07:12.
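Such a machinefile line is a host:slots pair, so it can be split with standard tools; a quick sketch using the sample value above:

```shell
# Split a machinefile entry of the form host:slots
# (sample value taken from the line quoted above)
echo 'lion07:12' | awk -F: '{print "host=" $1, "slots=" $2}'
# -> host=lion07 slots=12
```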
'qstat -f' shows just the short name in the queuename, as below.
On 13.06.2014 at 04:30, Sangmin Park wrote:
Yes, both files, 'mpiexec' and 'mpiexec.hydra', are in the bin directory
inside Intel MPI.
But the 'mpiexec' file is linked to the 'mpiexec.py' file.
Is it okay if I make the symbolic link 'mpiexec' point to 'mpiexec.hydra'
instead of 'mpiexec.py'?
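Repointing such a link is a one-line change. A sketch rehearsed in a scratch directory — the real files live in the Intel MPI bin directory, so the path here is illustrative, and whether the user's jobs then behave under Hydra is a separate question:

```shell
# Rehearse the relink in a scratch directory (the real bin dir is inside
# the Intel MPI tree; the file names mirror the ones discussed above).
mkdir -p /tmp/impi-bin && cd /tmp/impi-bin
touch mpiexec.py mpiexec.hydra
ln -s mpiexec.py mpiexec        # the shipped default: mpiexec -> mpiexec.py
ln -sf mpiexec.hydra mpiexec    # repoint the link to the Hydra launcher
readlink mpiexec                # -> mpiexec.hydra
```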
On 13.06.2014 at 06:50, Sangmin Park wrote:
Hi,
I've checked his job while it was running.
I checked it via the 'ps -ef' command and found that his job is using
mpiexec.hydra.
Putting a blank between -e and f will give a nice process tree.
And 'qrsh' is using '-inherit' option. Here's
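The flag spelling matters here: '-ef' is the flat System V full listing, while '-e f' adds the BSD 'f' (forest) option, which indents children under their parents and makes shepherd/job relationships visible. A quick illustration (the interesting lines on an execution host would be the sge_execd subtree):

```shell
# '-ef': every process, flat full-format listing (System V options).
ps -ef | head -n 3
# '-e f': every process plus BSD 'f' -> ASCII-art forest with '\_' markers.
ps -e f > /tmp/forest.txt
head -n 3 /tmp/forest.txt
```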
On 12.06.2014 at 04:23, Sangmin Park wrote:
I've checked the version of Intel MPI. He uses Intel MPI version 4.0.3.008.
Our system uses rsh to access the computing nodes. SGE does, too.
Please let me know how to check which one is used, 'mpiexec.hydra' or
'mpiexec'.
Do you have both files
Since there are several running jobs on the
And 'qrsh' is using the '-inherit' option. Here are the details.
p012chm 21424 21398 0 13:20 ? 00:00:00 bash /opt/sge/default/spool/lion07/job_scripts/46651
p012chm
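The job script running under the shepherd is only part of the picture: with Hydra, the remote helper processes must also be started through 'qrsh -inherit' rather than plain rsh/ssh, or their CPU time escapes SGE's accounting. One hedged sketch — the wrapper path, and whether this particular Intel MPI version honors these variables, are assumptions, not facts from the thread:

```shell
# Job-script fragment (illustrative): ask Hydra to bootstrap its remote
# helpers via SGE's stock rsh wrapper, which internally runs 'qrsh -inherit'.
# I_MPI_HYDRA_BOOTSTRAP / I_MPI_HYDRA_BOOTSTRAP_EXEC are Intel MPI
# environment variables; the wrapper path below is an assumption.
export I_MPI_HYDRA_BOOTSTRAP=rsh
export I_MPI_HYDRA_BOOTSTRAP_EXEC=$SGE_ROOT/mpi/rsh
```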
Hi,
On 11.06.2014 at 02:38, Sangmin Park wrote:
For the best performance, we recommend that users use 8 cores on a single
node, not distributed across multiple nodes.
As I said before, he uses the VASP application compiled with Intel MPI. So
he uses Intel MPI now.
Which version of Intel MPI?
On Wed, Jun 11, 2014 at 6:46 PM, Reuti
Hello,
I'm very confused about the output of the qacct command.
I thought the CPU column is the best way to measure resource usage by
users, based on this web page:
https://wiki.duke.edu/display/SCSC/Checking+SGE+Usage
But I have a situation.
One of the users in my institution, actually this user is
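'qacct' itself just aggregates the plain-text accounting file under $SGE_ROOT/<cell>/common/, so per-owner CPU totals can also be checked by hand. A sketch on fabricated records — per accounting(5) the owner is field 4 and cpu, in seconds, is field 37, but verify those positions against your own man page:

```shell
# Build a tiny fake accounting file: fields 1-7 named, 8-36 zeroed,
# field 37 = cpu seconds. All records below are made up.
filler=$(printf '0:%.0s' $(seq 1 29))   # fields 8-36
printf '%s\n' \
  "all.q:lion07:grp:p012chm:vasp:46651:sge:${filler}1200" \
  "all.q:lion08:grp:p012chm:vasp:46652:sge:${filler}300" \
  "all.q:lion07:grp:alice:cfd:46653:sge:${filler}500" > /tmp/acct.sample

# Sum cpu per owner, the same kind of figure 'qacct -o' reports:
awk -F: '{cpu[$4] += $37} END {for (u in cpu) print u, cpu[u]}' /tmp/acct.sample
```

On a real installation, 'qacct -o <owner>' (optionally with '-d <days>') gives this per-owner summary directly.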
On 10.06.2014 at 08:00, Sangmin Park wrote:
Hi,
On 10.06.2014 at 10:21, Sangmin Park wrote:
This user always runs parallel jobs using the VASP application.
Usually he uses 8 cores per job. Lots of jobs of this kind have been
submitted by the user.
8 cores on a particular node or 8 slots across the cluster? What MPI
implementation does
On Tue, Jun 10, 2014 at 5:58 PM, Reuti