On 8/31/2012 6:58 AM, Dave Love wrote:
> In the absence of any knowledge about that cluster, that doesn't confirm
> that it's reported for the specific hosts that the scheduler complained
> about, just that it's reported for some. Look explicitly at the load
> parameters from one of the hosts in question. Is mem_free there? Is
> anything else missing (see load_parameters(5))?

It is indeed there. Here is one node:

$ qhost -F -h compute-3-1
HOSTNAME                ARCH         NCPU NSOC NCOR NTHR  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS
----------------------------------------------------------------------------------------------
global                  -               -    -    -    -     -       -       -       -       -
compute-3-1             lx-amd64        8    2    8    8  0.01    7.8G  916.7M   16.6G   13.8M
   hl:arch=lx-amd64
   hl:num_proc=8.000000
   hl:mem_total=7.815G
   hl:swap_total=16.600G
   hl:virtual_total=24.415G
   hl:m_topology=SCCCCSCCCC
   hl:m_socket=2.000000
   hl:m_core=8.000000
   hl:m_thread=8.000000
   hl:load_avg=0.010000
   hl:load_short=0.000000
   hl:load_medium=0.010000
   hl:load_long=0.050000
   hl:mem_free=6.920G
   hl:swap_free=16.587G
   hl:virtual_free=23.507G
   hl:mem_used=916.703M
   hl:swap_used=13.754M
   hl:virtual_used=930.457M
   hl:cpu=0.000000
   hl:m_topology_inuse=SCCCCSCCCC
   hl:cores_in_use=0.000000
   hl:np_load_avg=0.001250
   hl:np_load_short=0.000000
   hl:np_load_medium=0.001250
   hl:np_load_long=0.006250

$ qrsh -q space1@compute-3-1 -l mem_free=1G
error: no suitable queues
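
If it helps narrow this down, I can also ask which queue instances (if any)
Grid Engine considers able to satisfy that request, and what mem_free value
it sees at the queue level (assuming qselect and "qstat -F" behave here as
documented):

$ qselect -l mem_free=1G -q space1@compute-3-1
$ qstat -F mem_free -q space1@compute-3-1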

It apparently does know about "mem_free", because if I request a non-existent
resource, "mem_noexist", it complains:

$ qrsh -q space1@compute-3-1 -l mem_noexist=1G
unknown resource "mem_noexist"
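
To be sure, I can also double-check how mem_free is defined in the complex
configuration (relop, requestable, consumable), in case something there is
not what the scheduler expects:

$ qconf -sc | egrep '^#|^mem_free'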

With no resource request it works:

$ qrsh -q space1@compute-3-1
Last login: Fri Aug 31 17:48:19 2012 from sys.local
Rocks Compute Node
Rocks 5.4.3 (Viper)
[me@compute-3-1 ~]$
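
One more thing I can check, as far as I understand how this works: whether
mem_free is set in complex_values for the queue or the host, since a value
there would take precedence over the 6.9G the load sensor reports and could
explain the rejection:

$ qconf -sq space1 | grep complex_values
$ qconf -se compute-3-1 | grep complex_values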
