Am 26.07.2011 um 23:07 schrieb Carlos Scaloni:
> I'm pasting the different outputs! I don't think it works correctly!
>
> [carlos@proyecto-192 c]$ qsub orden.sh
> Your job 14 ("orden.sh") has been submitted
> [carlos@proyecto-192 c]$ qalter -w p 14
> denied: job "14" does not exist
>
> [carlos@proyecto-192 c]$ qstat -f
> queuename qtype resv/used/tot. load_avg arch states
> ---------------------------------------------------------------------------------
> allhosts@proyecto-192 BIP 0/0/1 0.00 lx24-amd64
>
> ############################################################################
> - PENDING JOBS - PENDING JOBS - PENDING JOBS - PENDING JOBS - PENDING JOBS
> ############################################################################
> 14 0.00000 orden.sh carlos qw 07/26/2011 23:05:48 1
>
>
> [carlos@proyecto-192 c]$ qstat -f
> queuename qtype resv/used/tot. load_avg arch states
> ---------------------------------------------------------------------------------
> allhosts@proyecto-192 BIP 0/1/1 0.00 lx24-amd64
> 14 0.50000 orden.sh carlos r 07/26/2011 23:05:56 1
So, here the job is running. If it finishes too fast, you can put a sleep inside the
jobscript so that you can watch the process allocation in the process tree.
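For example, a minimal sketch of such a jobscript (your orden.sh with an arbitrary sleep added at the end):

#!/bin/bash
./hello
sleep 60   # keep the job alive for a minute so the process tree can be inspected

While it sleeps, the job stays in state "r" long enough to log in to the node and run `ps -e f`.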
-- Reuti
>
> [carlos@proyecto-192 c]$ qstat -f
> queuename qtype resv/used/tot. load_avg arch states
> ---------------------------------------------------------------------------------
> allhosts@proyecto-192 BIP 0/0/1 0.00 lx24-amd64
>
>
>
>
> 2011/7/26 Reuti <[email protected]>
> Am 26.07.2011 um 20:26 schrieb Carlos Scaloni:
>
>> I only see this:
>>
>> 1585 ? Sl 0:07 /usr/global/sge-6.2u5-bin/bin/lx24-amd64/sge_execd
>> 1716 ? Sl 0:12 /usr/global/sge-6.2u5-bin/bin/lx24-amd64/sge_qmaster
>
> Yes, as it's in "qw" state.
>
>
>> I can't see any information related to my script
>>
>> [carlos@proyecto-192 c]$ qsub orden.sh
>> Your job 12 ("orden.sh") has been submitted
>>
>> [carlos@proyecto-192 c]$ qstat
>> job-ID prior name user state submit/start at queue slots ja-task-ID
>> -----------------------------------------------------------------------------------------------------------------
>> 12 0.00000 orden.sh carlos qw 07/26/2011 20:25:22 1
>
> So we need to investigate why it's not scheduled. You can use:
>
> $ qalter -w p 12
>
> Maybe it provides some help. Or:
>
> $ qstat -f
>
> to check the state of the queues.
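>
> (Just as an additional check: if schedd_job_info is enabled in the scheduler configuration, see `qconf -msconf`, then
>
> $ qstat -j 12
>
> also prints a "scheduling info:" section for the pending job with the reason why it isn't dispatched.)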
>
> -- Reuti
>
>
>
>> cat orden.sh
>> #!/bin/bash
>> ./hello
>>
>> I don't see anything about hello or orden.sh with ps -e f
>>
>> 2011/7/26 Reuti <[email protected]>
>> Am 26.07.2011 um 19:23 schrieb Carlos Scaloni:
>>
>>> Hi friends!
>>>
>>> How can I know if the job ran correctly?
>>
>> You have to log in to the granted machine and check the process tree with `ps -e f`.
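>> On the granted node the running job then hangs below sge_execd, roughly like this (just a sketch, the PIDs and details vary):
>>
>> sge_execd
>>  \_ sge_shepherd-<jobid> -bg
>>      \_ bash (your orden.sh)
>>          \_ ./hello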
>>
>> -- Reuti
>>
>>
>>>
>>> 2011/7/20 Carlos Scaloni <[email protected]>
>>> Thanks!!
>>>
>>> I didn't have a queue....
>>>
>>> qconf -aq allhosts
>>> proyecto-192 added "allhosts" to cluster queue list
>>>
>>> qsub orden.sh
>>> Your job 7 ("orden.sh") has been submitted
>>>
>>> [carlos@proyecto-192 c]$ qstat
>>> job-ID prior name user state submit/start at queue slots ja-task-ID
>>> -----------------------------------------------------------------------------------------------------------------
>>> 7 0.00000 orden.sh carlos qw 07/20/2011 01:31:52 1
>>> [carlos@proyecto-192 c]$ qstat
>>> job-ID prior name user state submit/start at queue slots ja-task-ID
>>> -----------------------------------------------------------------------------------------------------------------
>>> 7 0.50000 orden.sh carlos r 07/20/2011 01:31:58 allhosts@proyecto-192 1
>>> [carlos@proyecto-192 c]$ qstat
>>>
>>>
>>> It worked 100% :)
>>>
>>> Thanks !!
>>>
>>>
>>> 2011/7/20 Reuti <[email protected]>
>>> Am 20.07.2011 um 00:41 schrieb Carlos Scaloni:
>>>
>>>> Yes, it is!
>>>>
>>>> qconf -sel
>>>> proyecto-192
>>>
>>> And the other question - what does your queue definition look like?
>>>
>>> -- Reuti
>>>
>>>
>>>>
>>>> 2011/7/19 Reuti <[email protected]>
>>>> Hi,
>>>>
>>>> Am 19.07.2011 um 17:44 schrieb Carlos Scaloni:
>>>>
>>>>> Thanks again!!
>>>>>
>>>>> I put the line: source /usr/global/sge-6.2u5-bin/default/common/settings.sh in /etc/profile and it works 100% ;)
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>> Now the problem is when I attempt to submit a job:
>>>>>
>>>>> qsub orden.sh
>>>>> Unable to run job: warning: carlos your job is not allowed to run in any queue
>>>>> Your job 4 ("orden.sh") has been submitted.
>>>>> Exiting.
>>>>>
>>>>> qconf -shgrp @allhosts
>>>>> group_name @allhosts
>>>>> hostlist proyecto-192
>>>>
>>>> Did you attach the hostgroup to a queue? And is proyecto-192 in the list of execution hosts (`qconf -sel`)?
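>>>>
>>>> A minimal sketch of attaching the hostgroup to a new queue (the queue name is up to you):
>>>>
>>>> $ qconf -aq allhosts      # opens an editor for the new cluster queue
>>>> hostlist   @allhosts      # set this attribute to your hostgroup
>>>>
>>>> (Or edit an existing queue with `qconf -mq <queuename>` and set its hostlist accordingly.)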
>>>>
>>>> -- Reuti
>>>>
>>>>
>>>>>
>>>>>
>>>>>
>>>>> 2011/7/18 Reuti <[email protected]>
>>>>> Am 18.07.2011 um 17:22 schrieb Gerard Henry:
>>>>>
>>>>>> On 07/15/11 12:08 PM, Dave Love wrote:
>>>>>>> Prentice Bisbal<[email protected]> writes:
>>>>>>>
>>>>>>>> On 07/12/2011 06:27 PM, Reuti wrote:
>>>>>>>>> Am 12.07.2011 um 01:27 schrieb Carlos Scaloni:
>>>>>>>>>
>>>>>>>>>> How can I get this executed when the machine boots: source /usr/global/sge-6.2u5-bin/default/common/settings.sh ?
>>>>>>>>>
>>>>>>>>
>>>>>>>> Why do you even need to do that at boot time?
>>>>>>>
>>>>>>> Well, the equivalent has to happen in the daemon init scripts, typically
>>>>>>> at boot time, but the GE installation scripts should sort that out, so
>>>>>>> yes.
>>>>>>>
>>>>>>> I'm confused what the fundamental problem is here, assuming it's as a
>>>>>>> result of using the inst_sge script and generally the instructions at
>>>>>>> <http://wikis.sun.com/display/gridengine62u5/Installing>. The script,
>>>>>>> at least, can be fixed if we know what the problem is.
>>>>>>
>>>>>> How do you handle this when modules (http://modules.sourceforge.net/) are
>>>>>> installed on the cluster?
>>>>>> I'm unable to create a module script to do:
>>>>>> module load sge
>>>>>>
>>>>>> with settings.sh. Anybody here?
>>>>>
>>>>> As you need it for all users by default, I usually put it in /etc/profile.local in
>>>>> openSUSE. I assume there are similar files in other distributions. This avoids
>>>>> installing it in /etc/skel/.profile to make it available with each new user's setup,
>>>>> where they could mess it up.
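>>>>>
>>>>> E.g., a single line there (with the path from this thread's installation):
>>>>>
>>>>> source /usr/global/sge-6.2u5-bin/default/common/settings.sh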
>>>>>
>>>>> -- Reuti
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>
>
_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users