On 19.03.2014 at 15:55, Abhinav Mittal wrote:

> I remember manipulating using qmon as suggested here :-
> http://scidom.wordpress.com/category/software/sge/

AFAICS this setting would bind all slots to core 2 of the machine under Solaris 
(i.e. to a single core).

If you have a dual-core machine, you would need to set slots to 2 
instead.
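To sanity-check this, one could first ask the machine how many cores it actually has. A minimal sketch, assuming GNU coreutils for `nproc`; the commented `qconf -mattr` line assumes a working SGE admin host and is only an illustration:

```shell
# Sketch: count the cores before raising "slots" for the queue.
# nproc is GNU coreutils; the commented qconf line assumes SGE is installed.
cores=$(nproc)
echo "cores: $cores"
# Then, e.g.:  qconf -mattr queue slots "$cores" New
```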


> Is there a way to revert it?

Yes, set it to UNDEFINED again.

Either in `qmon` or with `qconf -mq New`.
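For a non-interactive alternative, `qconf -mattr` can change a single queue attribute without opening an editor. A sketch; the guard just makes it harmless to run on a machine without SGE:

```shell
# Sketch: reset the "processors" attribute of queue "New" to UNDEFINED
# via qconf -mattr. Guarded so it is safe to run without SGE installed.
if command -v qconf >/dev/null 2>&1; then
    qconf -mattr queue processors UNDEFINED New
else
    echo "qconf not found - run this on an SGE admin host"
fi
```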

-- Reuti


> or will I have to make the changes manually?
> 
> On Wed, Mar 19, 2014 at 7:22 PM, Reuti <[email protected]> wrote:
>> Please keep the list posted!
>> 
>> On 19.03.2014 at 14:36, Abhinav Mittal wrote:
>> 
>>> abhinav@abhnav:~$ qconf -sq New
>>> qname                 New
>>> hostlist              abhnav
>>> seq_no                0
>>> load_thresholds       np_load_avg=1.75
>>> suspend_thresholds    NONE
>>> nsuspend              1
>>> suspend_interval      00:05:00
>>> priority              0
>>> min_cpu_interval      00:05:00
>>> processors            2
>> 
>> Most likely this should be set to UNDEFINED unless you are running one of 
>> the operating systems supported for this option.
>> 
>> Please have a look at `man queue_conf` for an explanation.
>> 
>> -- Reuti
>> 
>> 
>>> qtype                 BATCH INTERACTIVE
>>> ckpt_list             NONE
>>> pe_list               make
>>> rerun                 FALSE
>>> slots                 1
>>> tmpdir                /tmp
>>> shell                 /bin/csh
>>> prolog                NONE
>>> epilog                NONE
>>> shell_start_mode      posix_compliant
>>> starter_method        NONE
>>> suspend_method        NONE
>>> resume_method         NONE
>>> terminate_method      NONE
>>> notify                00:00:60
>>> owner_list            NONE
>>> user_lists            arusers
>>> xuser_lists           NONE
>>> subordinate_list      NONE
>>> complex_values        NONE
>>> projects              NONE
>>> xprojects             NONE
>>> calendar              NONE
>>> initial_state         default
>>> s_rt                  INFINITY
>>> h_rt                  INFINITY
>>> s_cpu                 INFINITY
>>> h_cpu                 INFINITY
>>> s_fsize               INFINITY
>>> h_fsize               INFINITY
>>> s_data                INFINITY
>>> h_data                INFINITY
>>> s_stack               INFINITY
>>> h_stack               INFINITY
>>> s_core                INFINITY
>>> h_core                INFINITY
>>> s_rss                 INFINITY
>>> h_rss                 INFINITY
>>> s_vmem                INFINITY
>>> h_vmem                INFINITY
>>> 
>>> On Wed, Mar 19, 2014 at 6:54 PM, Reuti <[email protected]> wrote:
>>>> On 19.03.2014 at 14:02, Abhinav Mittal wrote:
>>>> 
>>>>> --------------------------------------------------------------------------------------------------------------------------------
>>>>> abhinav@abhnav:~$ qconf -sel
>>>>> abhnav
>>>>> abhinav@abhnav:~$ qconf -sql
>>>>> New
>>>>> abhinav@abhnav:~$ qconf -sconf
>>>>> #global:
>>>>> execd_spool_dir              /var/spool/gridengine/execd
>>>>> mailer                       /usr/bin/mail
>>>>> xterm                        /usr/bin/xterm
>>>>> load_sensor                  none
>>>>> prolog                       none
>>>>> epilog                       none
>>>>> shell_start_mode             posix_compliant
>>>>> login_shells                 bash,sh,ksh,csh,tcsh
>>>>> min_uid                      0
>>>>> min_gid                      0
>>>>> user_lists                   none
>>>>> xuser_lists                  none
>>>>> projects                     none
>>>>> xprojects                    none
>>>>> enforce_project              false
>>>>> enforce_user                 auto
>>>>> load_report_time             00:00:40
>>>>> max_unheard                  00:05:00
>>>>> reschedule_unknown           00:00:00
>>>>> loglevel                     log_warning
>>>> 
>>>> I suggest changing this to:
>>>> 
>>>> loglevel log_info
>>>> 
>>>> 
>>>>> administrator_mail           root
>>>>> set_token_cmd                none
>>>>> pag_cmd                      none
>>>>> token_extend_time            none
>>>>> shepherd_cmd                 none
>>>>> qmaster_params               none
>>>>> execd_params                 none
>>>>> reporting_params             accounting=true reporting=false \
>>>>>                              flush_time=00:00:15 joblog=false \
>>>>>                              sharelog=00:00:00
>>>>> finished_jobs                100
>>>>> gid_range                    65400-65500
>>>>> max_aj_instances             2000
>>>>> max_aj_tasks                 75000
>>>>> max_u_jobs                   0
>>>>> max_jobs                     0
>>>>> auto_user_oticket            0
>>>>> auto_user_fshare             0
>>>>> auto_user_default_project    none
>>>>> auto_user_delete_time        86400
>>>>> delegated_file_staging       false
>>>>> reprioritize                 0
>>>>> rlogin_daemon                /usr/sbin/sshd -i
>>>>> rlogin_command               /usr/bin/ssh
>>>>> qlogin_daemon                /usr/sbin/sshd -i
>>>>> qlogin_command               /usr/share/gridengine/qlogin-wrapper
>>>>> rsh_daemon                   /usr/sbin/sshd -i
>>>>> rsh_command                  /usr/bin/ssh
>>>> 
>>>> Did you set this up? The default is to use the builtin method for the 
>>>> commands above.
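For reference, the stock settings for these entries (to the best of my recollection of `sge_conf(5)` in GE 6.2+, so please verify against your local man page) look like this:

```
rlogin_daemon                builtin
rlogin_command               builtin
qlogin_daemon                builtin
qlogin_command               builtin
rsh_daemon                   builtin
rsh_command                  builtin
```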
>>>> 
>>>> 
>>>>> jsv_url                      none
>>>>> jsv_allowed_mod              ac,h,i,e,o,j,M,N,p,w
>>>> 
>>>> Fine.
>>>> 
>>>> 
>>>>> abhinav@abhnav:~$ qstat -f
>>>>> queuename                      qtype resv/used/tot. load_avg arch          states
>>>>> ---------------------------------------------------------------------------------
>>>>> New@abhnav                     BIP   0/0/1          0.68     lx26-amd64
>>>>> 
>>>>> ############################################################################
>>>>> - PENDING JOBS - PENDING JOBS - PENDING JOBS - PENDING JOBS - PENDING JOBS
>>>>> ############################################################################
>>>>>    3 0.75000 emt0.0.0.t abhinav      qw    03/16/2014 00:54:30     1
>>>>>    5 0.75000 emt0.0.0.t abhinav      qw    03/16/2014 07:44:02     1
>>>>>    6 0.75000 emt0.0.0.t abhinav      qw    03/16/2014 07:46:05     1
>>>>>    7 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 22:16:49     1
>>>>>    9 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 22:23:41     1
>>>>>   10 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 22:25:11     1
>>>>>   11 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 22:27:40     1
>>>>>   12 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 22:47:21     1
>>>>>   13 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 23:14:14     1
>>>>>   14 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 23:14:48     1
>>>>>   15 0.75000 emt0.0.0.t abhinav      qw    03/19/2014 16:06:01     1
>>>>>   16 0.25000 script.sh  abhinav      qw    03/19/2014 17:03:34     1
>>>>>   17 0.25000 script.sh  abhinav      qw    03/19/2014 17:04:14     1
>>>>>   18 0.25000 script.sh  abhinav      qw    03/19/2014 17:04:54     1
>>>>>   19 0.25000 script.sh  abhinav      qw    03/19/2014 17:07:08     1
>>>> 
>>>> So, what does the queue look like?
>>>> 
>>>> $ qconf -sq New
>>>> 
>>>> -- Reuti
>>>> 
>>>> 
>>>>> abhinav@abhnav:~$ hostname
>>>>> abhnav
>>>>> 
>>>>> On Wed, Mar 19, 2014 at 5:24 PM, Reuti <[email protected]> wrote:
>>>>>> 
>>>>>> On 19.03.2014 at 12:37, Abhinav Mittal wrote:
>>>>>> 
>>>>>>> Not working
>>>>>>> 
>>>>>>> abhinav@abhnav:~$ qconf -ss
>>>>>>> abhnav
>>>>>>> localhost
>>>>>>> abhinav@abhnav:~$ hostname
>>>>>>> abhnav
>>>>>>> abhinav@abhnav:~$ qsub script.sh
>>>>>>> Unable to run job: warning: abhinav your job is not allowed to run in 
>>>>>>> any queue
>>>>>>> Your job 18 ("script.sh") has been submitted.
>>>>>>> Exiting.
>>>>>>> 
>>>>>>> The same happens with `qsub -b y` as well.
>>>>>>> 
>>>>>>> On Wed, Mar 19, 2014 at 4:30 PM, Reuti <[email protected]> 
>>>>>>> wrote:
>>>>>>>> Hi,
>>>>>>>> 
>>>>>>>> On 19.03.2014 at 11:45, Abhinav Mittal wrote:
>>>>>>>> 
>>>>>>>>> I am trying to run a software called
>>>>>>>>> "Segway"(http://noble.gs.washington.edu/proj/segway/doc/1.1.0/segway.html)
>>>>>>>> 
>>>>>>>> Before looking into any application-specific problems: does a simple 
>>>>>>>> script echoing "Hello World" work? Can you also submit a binary with 
>>>>>>>> `qsub -b y hostname`?
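Such a test script might look like this (a sketch; the file name `hello.sh` is just an example):

```shell
#!/bin/sh
# Minimal test script; submit with:  qsub hello.sh
echo "Hello World"
```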
>>>>>>>> 
>>>>>>>> -- Reuti
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> and getting the error "your job is not allowed to run in any queue".
>>>>>>>>> submit hosts: localhost, abhnav
>>>>>>>>> hostname: abhnav
>>>>>>>>> I am still getting this error.
>>>>>>>>> Please help.
>>>>>>>>> 
>>>>>>>>> -----------------------------------------------------------------------------------------------------------------
>>>>>>>>> 
>>>>>>>>> abhinav@abhnav:~$ segway --num-labels=4 train test.genomedata traindir
>>>>>>>>> traindir/observations/chr21.0000.float32 (9411193, 9595548)
>>>>>>>>> ____ PROGRAM ENDED SUCCESSFULLY WITH STATUS 0 AT Wednesday March 19
>>>>>>>>> 2014, 16:06:01 IST ____
>>>>>>>>> Traceback (most recent call last):
>>>>>>>>> File "/home/abhinav/arch/Linux-x86_64/bin/segway", line 9, in <module>
>>>>>>>>> load_entry_point('segway==1.1.0', 'console_scripts', 'segway')()
>>>>>>>>> File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py",
>>>>>>>>> line 3592, in main
>>>>>>>>> return runner()
>>>>>>>>> File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py",
>>>>>>>>> line 3429, in __call__
>>>>>>>>> self.run(*args, **kwargs)
>>>>>>>>> File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py",
>>>>>>>>> line 3407, in run
>>>>>>>>> self.run_train()
>>>>>>>>> File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py",
>>>>>>>>> line 3038, in run_train
>>>>>>>>> instance_params = run_train_func(num_segs_range)
>>>>>>>>> File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py",
>>>>>>>>> line 3056, in run_train_singlethread
>>>>>>>>> res = [self.run_train_instance()]
>>>>>>>>> File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py",
>>>>>>>>> line 2937, in run_train_instance
>>>>>>>>> self.run_train_round(instance_index, round_index, **kwargs)
>>>>>>>>> File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py",
>>>>>>>>> line 2899, in run_train_round
>>>>>>>>> round_index, **kwargs)
>>>>>>>>> File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py",
>>>>>>>>> line 2770, in queue_train_parallel
>>>>>>>>> res.queue(restartable_job)
>>>>>>>>> File 
>>>>>>>>> "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/cluster/__init__.py",
>>>>>>>>> line 174, in queue
>>>>>>>>> self._queue_unconditional(restartable_job)
>>>>>>>>> File 
>>>>>>>>> "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/cluster/__init__.py",
>>>>>>>>> line 164, in _queue_unconditional
>>>>>>>>> jobid = restartable_job.run()
>>>>>>>>> File 
>>>>>>>>> "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/cluster/__init__.py",
>>>>>>>>> line 116, in run
>>>>>>>>> res = self.session.runJob(job_template)
>>>>>>>>> File 
>>>>>>>>> "/home/abhinav/arch/Linux-x86_64/lib/python2.7/drmaa-0.7.6-py2.7.egg/drmaa/session.py",
>>>>>>>>> line 314, in runJob
>>>>>>>>> c(drmaa_run_job, jid, sizeof(jid), jobTemplate)
>>>>>>>>> File 
>>>>>>>>> "/home/abhinav/arch/Linux-x86_64/lib/python2.7/drmaa-0.7.6-py2.7.egg/drmaa/helpers.py",
>>>>>>>>> line 299, in c
>>>>>>>>> return f(*(args + (error_buffer, sizeof(error_buffer))))
>>>>>>>>> File 
>>>>>>>>> "/home/abhinav/arch/Linux-x86_64/lib/python2.7/drmaa-0.7.6-py2.7.egg/drmaa/errors.py",
>>>>>>>>> line 151, in error_check
>>>>>>>>> raise _ERRORS[code - 1](error_string)
>>>>>>>>> drmaa.errors.DeniedByDrmException: code 17: warning: abhinav your job
>>>>>>>>> is not allowed to run in any queue
>>>>>>>>> Your job 15 ("emt0.0.0.traindir.43308e4eaf5211e3a4741803736f5e43") has
>>>>>>>>> been submitted
>>>>>>>>> abhinav@abhnav:~$ hostname
>>>>>>>>> abhnav
>>>>>>>>> abhinav@abhnav:~$ qconf -ss
>>>>>>>>> abhnav
>>>>>>>>> localhost
>>>>>>>>> _______________________________________________
>>>>>>>>> users mailing list
>>>>>>>>> [email protected]
>>>>>>>>> https://gridengine.org/mailman/listinfo/users
>>>>>>>> 
>>>>>> 
>>>>>> 
>>>> 
>> 

