Re: [SGE-discuss] Is a WSL windows exec host for SGE possible

2017-05-19 Thread Reuti
Hi, > On 19.05.2017 at 15:46, Mukesh Chawla wrote: > > Hi, > > Thanks a lot William and Reuti for the answers. Apparently I scheduled a > .bat (Windows batch file) job to check whether it could run, and because of that > all.q got dropped from the queue list. > > qstat

Re: [SGE-discuss] GPUs as a resource

2017-05-19 Thread Marco Donauer
Hi Juan, sure, this is right. We always recommend keeping the number of queues low; problems that can be resolved another way shouldn't be solved by setting up unnecessary queues. Each queue (and this is not just an issue with Univa Grid Engine) increases the

Re: [SGE-discuss] GPUs as a resource

2017-05-19 Thread juanesteban.jime...@mdc-berlin.de
Me, nothing. My colleagues attended your training and were told not to create individual queues for solving issues like GPU node access. They are using Univa on their cluster; I am not. Regards, Juan Jimenez System Administrator, HPC MDC Berlin / IT-Dept. Tel.: +49 30 9406 2800

Re: [SGE-discuss] GPUs as a resource

2017-05-19 Thread Marco Donauer
Hi Juan, what have you been told by Univa regarding GPUs? Regards Marco On 19 May 2017 17:12:42, "juanesteban.jime...@mdc-berlin.de" wrote: I am just telling you what my colleagues say they were told by Univa. Regards, Juan Jimenez System Administrator,

Re: [SGE-discuss] GPUs as a resource

2017-05-19 Thread juanesteban.jime...@mdc-berlin.de
I am just telling you what my colleagues say they were told by Univa. Regards, Juan Jimenez System Administrator, HPC MDC Berlin / IT-Dept. Tel.: +49 30 9406 2800 From: Reuti [re...@staff.uni-marburg.de] Sent: Friday, May 19, 2017 17:01 To: Jimenez, Juan

Re: [SGE-discuss] GPUs as a resource

2017-05-19 Thread juanesteban.jime...@mdc-berlin.de
> It does indeed, but not by a whole lot for a queue on a couple of nodes. > Since you want to reserve these nodes for GPU users, the extra queue is > needless. > I suggest: > 1. Make the GPU complex FORCED (so users who don't request a GPU can't end up > on a node with GPUs). > 2. Define the
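Step 1 above could be sketched as follows; this is a minimal sketch, assuming a boolean complex named `gpu` and an example host name, with the columns following the standard `qconf -mc` layout (name, shortcut, type, relop, requestable, consumable, default, urgency):

```shell
# Make a boolean "gpu" complex FORCED so only jobs that explicitly
# request it can be scheduled onto hosts that offer it.
qconf -mc
# In the editor that opens, add a line such as:
#   gpu   gpu   BOOL   ==   FORCED   NO   0   0

# Attach the complex to the GPU exec host (host name is an example):
qconf -me gpu-node01
# In the editor, set:
#   complex_values   gpu=TRUE
```

With FORCED, jobs that do not pass `-l gpu` are never dispatched to that host, which avoids the need for a separate queue.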

Re: [SGE-discuss] GPUs as a resource

2017-05-19 Thread Reuti
> On 19.05.2017 at 16:35, juanesteban.jime...@mdc-berlin.de wrote: > >> You are being told by whom or what? If it is a what, then the exact message >> would be helpful. > > By my colleagues who are running a 2nd cluster using Univa Grid Engine. This > was a warning from Univa not to do it that way

Re: [SGE-discuss] GPUs as a resource

2017-05-19 Thread Reuti
> On 19.05.2017 at 16:52, juanesteban.jime...@mdc-berlin.de wrote: > > Yes, shared to all nodes, but the owner of the share is user gridengine, > under which the qmaster runs. Permissions on the scripts are 775. This shouldn't block the execution. Can you log in to the node and try to start

Re: [SGE-discuss] GPUs as a resource

2017-05-19 Thread Reuti
Then a boolean complex which is set to FORCED can be used: $ qconf -mc … gpu gpu BOOL == FORCED NO 0 0 and attach it to the exechost's complex_values: $ qconf -me gpu-node … complex_values gpu=TRUE Users then have to request "-l gpu" (resp.
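Requesting the complex at submission time might then look like the following sketch; the job script name is a placeholder:

```shell
# With the "gpu" complex set to FORCED, a job must request it explicitly;
# without "-l gpu" the scheduler will never place the job on the GPU host.
qsub -l gpu my_gpu_job.sh   # my_gpu_job.sh is a placeholder script name

# Verify which hosts offer the complex (output depends on the cluster):
qhost -F gpu
```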

Re: [SGE-discuss] GPUs as a resource

2017-05-19 Thread juanesteban.jime...@mdc-berlin.de
Yes, shared to all nodes, but the owner of the share is user gridengine, under which the qmaster runs. Permissions on the scripts are 775. Regards, Juan Jimenez System Administrator, HPC MDC Berlin / IT-Dept. Tel.: +49 30 9406 2800 From: Reuti

Re: [SGE-discuss] GPUs as a resource

2017-05-19 Thread juanesteban.jime...@mdc-berlin.de
> You are being told by whom or what? If it is a what, then the exact message > would be helpful. By my colleagues who are running a 2nd cluster using Univa Grid Engine. This was a warning from Univa not to do it that way because it increases qmaster workload. Juan

Re: [SGE-discuss] GPUs as a resource

2017-05-19 Thread juanesteban.jime...@mdc-berlin.de
I put them in /opt/sge/default/common/sge-gpuprolog. I think I will just have to put them on the gpu node where the users can get to them. Regards, Juan Jimenez System Administrator, HPC MDC Berlin / IT-Dept. Tel.: +49 30 9406 2800 From: Reuti

Re: [SGE-discuss] Is a WSL windows exec host for SGE possible

2017-05-19 Thread Mukesh Chawla
Hi, Thanks a lot William and Reuti for the answers. Apparently I scheduled a .bat (Windows batch file) job to check whether it could run, and because of that all.q got dropped from the queue list. qstat -explain E gives the following output: "queue all.q marked QERROR as result of job 2's
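Once the cause of the failed job is addressed, a queue instance stuck in the E (error) state can typically be cleared with `qmod`; this is a sketch, and the queue and host names are examples:

```shell
# Show why the queue is in error state:
qstat -explain E

# Clear the error state on the whole cluster queue (name is an example):
qmod -cq all.q

# Or clear it on a single queue instance on one exec host:
qmod -cq all.q@exechost01
```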

Re: [SGE-discuss] GPUs as a resource

2017-05-19 Thread Reuti
Hi, > On 18.05.2017 at 12:37, juanesteban.jime...@mdc-berlin.de wrote: > > Ok, so I created a new queue, gpu.q, that only has that node, with the complex > value for the gpu. I removed the node from @allhosts so that all.q and > interactive.q don’t use the node. I also modified the user

Re: [SGE-discuss] GPUs as a resource

2017-05-19 Thread Reuti
Hi, > On 18.05.2017 at 14:15, juanesteban.jime...@mdc-berlin.de wrote: > > I tried it according to the instructions, but it won’t work. The messages > file for the qmaster says that the scripts are not executable, but I chmod > +x’d both scripts. Where did you put the scripts? The
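A quick way to diagnose the "not executable" complaint is to check the scripts from the exec host itself; this is a sketch, using the directory mentioned elsewhere in the thread, and the script name is a placeholder:

```shell
# On the exec host: confirm the prolog scripts are visible on the share
# and carry execute permission for the user the job runs as.
ls -l /opt/sge/default/common/sge-gpuprolog/

# Try running a prolog script by hand as a normal user to surface
# permission or interpreter errors (script name is a placeholder):
/opt/sge/default/common/sge-gpuprolog/prolog.sh
```

Note that the prolog runs on the execution host as the job owner (or the user configured in the queue's prolog field), so permissions must work for that user on that host, not for the qmaster's user.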

Re: [SGE-discuss] Is a WSL windows exec host for SGE possible

2017-05-19 Thread Reuti
Hi, > On 19.05.2017 at 12:46, Mukesh Chawla wrote: > > qhost configuration > HOSTNAME ARCH NCPU LOAD MEMTOT MEMUSE SWAPTO SWAPUS >

Re: [SGE-discuss] Is a WSL windows exec host for SGE possible

2017-05-19 Thread Reuti
Hi, > On 19.05.2017 at 12:04, Mukesh Chawla wrote: > > I am trying to run a Windows exec host attached to a Unix-based master > using the new feature WSL (Windows Subsystem for Linux) introduced in Windows > 10. I have successfully set up the grid engine master and

[SGE-discuss] Is a WSL windows exec host for SGE possible

2017-05-19 Thread Mukesh Chawla
Hi, I am trying to run a Windows exec host attached to a Unix-based master using the new feature WSL (Windows Subsystem for Linux) introduced in Windows 10. I have successfully set up the grid engine master and execution host on it. But when I start submitting jobs to the exec host they just remain in

[SGE-discuss] Exclusion

2017-05-19 Thread juanesteban.jime...@mdc-berlin.de
So, I now have a working gpu.q. However, users in the ACL eat up slots even if they have not requested a GPU resource. How do I keep out jobs that do not specifically request a GPU? I only want jobs to run on that queue/node if they want to use one of the two GPUs. Thanks! Juan