Hi,

On 11.06.2014, at 20:44, VG wrote:

> Thanks for the information. So when I qlogin, I am assigned a compute node 
> with 1 slot/core (I didn't use the -pe threaded option for more cores/slots).
> 
> The cluster assigns me a node with 8 cores, of which I have been assigned 1 
> core/slot.

Correct. And it is best not to challenge the admin, but to stay within the 
granted limits.
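Inside the qlogin session you can check what was actually granted. A small 
sketch (SGE exports NSLOTS with the granted slot count; the fallback to 1 is 
only for a shell where the variable is unset):

```shell
# SGE exports NSLOTS inside the session with the number of granted slots
# (the :-1 fallback only covers a shell where the variable is unset):
echo "granted slots: ${NSLOTS:-1}"

# If the admin enforces h_vmem, the hard virtual-memory limit reflects it:
ulimit -Hv
```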


> I did qstat -F and got some info for that node which is as follows
> 
> login@compute-0-1005.local    IP    0/7/8          0.04     lx26-amd64    
>         hl:arch=lx26-amd64
>         hl:num_proc=16
>         hl:mem_total=60.980G
>         hl:swap_total=996.207M
>         hl:virtual_total=61.953G
>         hl:load_avg=0.040000
>         hl:load_short=0.040000
>         hl:load_medium=0.040000
>         hl:load_long=0.000000
>         hl:mem_free=59.794G
>         hl:swap_free=4.000K
>         hl:virtual_free=59.794G
>         hl:mem_used=1.186G
>         hl:swap_used=996.203M
>         hl:virtual_used=2.159G
>         hl:cpu=0.300000
>         hl:m_topology=SCCCCSCCCCSCCCCSCCCC
>         hl:m_topology_inuse=SCCCCSCCCCSCCCCSCCCC
>         hl:m_socket=4
>         hl:m_core=16
>         hl:np_load_avg=0.002500
>         hl:np_load_short=0.002500
>         hl:np_load_medium=0.002500
>         hl:np_load_long=0.000000
>         qf:qname=login
>         qf:hostname=compute-0-1005.local
>         qc:slots=1
>         qf:tmpdir=/tmp
>         qf:seq_no=0
>         qf:rerun=0.000000
>         qf:calendar=NONE
>         qf:s_rt=infinity
>         qf:h_rt=infinity
>         qf:s_cpu=infinity
>         qf:h_cpu=infinity
>         qf:s_fsize=infinity
>         qf:h_fsize=infinity
>         qf:s_data=infinity
>         qf:h_data=infinity
>         qf:s_stack=infinity
>         qf:h_stack=infinity
>         qf:s_core=infinity
>         qf:h_core=infinity
>         qf:s_rss=infinity
>         qf:h_rss=infinity
>         qf:s_vmem=infinity
>         qf:h_vmem=infinity
>         qf:min_cpu_interval=00:05:00
> 6141546 0.50500 QLOGIN     vgupta12     r     06/11/2014 12:00:27     1     
> 
> A plain qlogin will place me in one of the 8 slots on this node. What I 
> want to know is how much RAM and how many processors are available for me. 
> But I think the node is always shared, so I have to request something like 
> `qlogin -pe threaded 2 -l mf=4G,num_proc=4`, and then the cluster will put 
> me on a particular node with the resources I demanded. Is that correct?

Yep. There are several ways to set up a memory request. Ask the admin which 
one was chosen: it can be mem_free, virtual_free, or h_vmem. Each has certain 
advantages and side effects.
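A sketch of the three variants (assuming your site defines these complexes — 
ask the admin which; the PE name `threaded` is taken from your mail), plus a 
way to read a single value back out of `qstat -F` output:

```shell
# Sketch, assuming these complexes exist at your site (ask the admin which):
#   qlogin -pe threaded 2 -l mem_free=4G      # scheduled against free RAM
#   qlogin -pe threaded 2 -l virtual_free=4G  # scheduled against RAM + swap
#   qlogin -pe threaded 2 -l h_vmem=4G        # hard limit, enforced

# Reading one value out of `qstat -F` output (sample lines from your mail
# stand in here; on the cluster, pipe `qstat -F` into the awk call instead):
printf 'hl:num_proc=16\nhl:mem_free=59.794G\nqc:slots=1\n' \
    | awk -F= '/hl:mem_free/ {print $2; exit}'
# prints 59.794G
```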

-- Reuti


> 
> Regards
> Varun
> 
> 
> On Wed, Jun 11, 2014 at 2:16 PM, Reuti <[email protected]> wrote:
> Hi,
> 
> On 11.06.2014, at 17:34, VG wrote:
> 
> > I am working on SGE.
> > When I ssh to the cluster, it connects me to the head node. Then I use 
> > qlogin to get onto one of the compute nodes, i.e. one of the host 
> > machines.
> >
> > To my understanding, when I qlogin I can use all the resources of the 
> > compute node that is provided to me. Is there a way to find out how many 
> > CPU processors are available on that node and how much RAM it has?
> >
> > Hope to hear from you soon.
> 
> No. You are entitled to use only the resources you requested. In case you 
> want to use more than one core, it should be requested when you issue 
> `qlogin` by requesting a PE. Depending on the version of SGE, the limit 
> might even be enforced at the kernel level, and all of your processes will 
> run on one core only, leaving the other cores free for other jobs.
> 
> Memory can, for example, be requested with "-l h_vmem" and checked with 
> `ulimit -Hv` (the "h_vmem" request can be made mandatory by the admin).
> 
> -- Reuti
> 
> 


_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
