On Fri, Jan 31, 2020 at 06:26:19PM +0100, Reuti wrote:
>
>
> > > On 31.01.2020 at 18:23, Jerome IBt wrote:
> >
> > On 31/01/2020 at 10:19, Reuti wrote:
> >> Hi Jérôme,
> >>
> >> Personally I would prefer to keep the output of `qquota` short and use it
> >> only for users' limits. I.e. defin
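As a sketch of that approach, a per-user resource quota set could look like this (the name and the 32-slot cap are illustrative):

    # qconf -arqs per_user
    {
       name         per_user
       description  "per-user slot cap"
       enabled      TRUE
       limit        users {*} to slots=32
    }
    # qquota -u <user> then reports only the limits that currently apply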
On Tue, Jan 21, 2020 at 03:51:01PM +0000, Skylar Thompson wrote:
> -V strips out PATH and LD_LIBRARY_PATH for security reasons, since prolog
I don't think this is the case. I've just experimented with one of our 8.1.9
clusters: I can set an arbitrary PATH, run qsub -V, and have the value I set
sh
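For anyone wanting to repeat that experiment, a minimal test (paths are arbitrary):

    cat > pathtest.sh <<'EOF'
    #!/bin/sh
    echo "$PATH"
    EOF
    export PATH=/opt/custom/bin:$PATH
    qsub -V -j y -o pathtest.out pathtest.sh
    # pathtest.out should contain /opt/custom/bin if -V passed PATH through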
On Mon, Jan 20, 2020 at 10:12:09PM +0000, Shiel, Adam wrote:
>Hi,
>
>I inherited maintenance of a small cluster running SGE 8.1.8. I've been away
>from grid engine for a while and am seeing behavior I don't remember from
>our 6.X something system.
>
>On our ne
On Thu, Nov 14, 2019 at 06:00:27AM +0000, Manuel Sopena Ballesteros wrote:
>Dear SGE community,
>
>Is there a way to limit the number of cores used per job, like an h_vcpu
>type of flag?
>
>If yes, then what would SGE do if, let's say, a user runs something like qsub
>-b mys
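Stock SGE has no h_vcpu complex (h_cpu limits CPU seconds, not cores); core binding is the closest match. A sketch, assuming a PE named smp and execd-side binding support:

    # request 4 slots and pin the job to 4 cores
    qsub -pe smp 4 -binding linear:4 myjob.sh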
On Tue, Oct 29, 2019 at 08:00:27PM +0000, Mun Johl wrote:
>Hi Hugh,
>
>Thank you for your reply.
>
>See my comments below.
>
>What’s the output of ‘qconf -sq long.q’? Are you sure it doesn’t still
>reference the old hostname, maybe within a hostgroup?
>
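To chase a stale hostname through hostgroups, something along these lines works (hostgroup name assumed):

    qconf -sq long.q | grep hostlist   # does the queue use a hostgroup?
    qconf -shgrpl                      # list all hostgroups
    qconf -shgrp @allhosts             # show members; look for the old name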
On Wed, Oct 30, 2019 at 11:35:02AM -0600, Jerome wrote:
> Dear all
>
> I've been trying to compile a deb package of SoGE, using the repo on GitLab
> "https://gitlab.com/loveshack/sge.git"
On Wed, Aug 14, 2019 at 05:11:02PM +0200, Nicolas FOURNIALS wrote:
> Hi,
>
> On 14/08/2019 at 16:35, Andreas Haupt wrote:
> > Preventing access to the 'wrong' gpu devices by "malicious jobs" is not
> > that easy. An idea could be to e.g. play with device permissions.
>
> That's what we do by ha
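A rough sketch of the device-permission idea as a queue prolog; note that stock SoGE has no built-in per-job GPU assignment, so ASSIGNED_GPU is a hypothetical variable that a site-specific mechanism would have to set:

    #!/bin/sh
    # prolog sketch: give the job owner exclusive access to its assigned GPU
    dev=/dev/nvidia${ASSIGNED_GPU:?no GPU assigned}
    chown "$USER" "$dev"    # $USER is the job owner in the prolog environment
    chmod 600 "$dev"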
On Mon, Aug 26, 2019 at 02:15:22PM +0200, Dietmar Rieder wrote:
> Hi,
>
> maybe this is a stupid question, but I'd like to limit the used/usable
> number of cores to the number of slots that were reserved for a job.
>
> We often see that people reserve 1 slot, e.g. "qsub -pe smp 1 [...]"
> but t
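One blunt way to enforce that is a starter_method wrapper around taskset; a naive sketch (it pins every job to the same low-numbered cores, so real setups usually prefer -binding or cgroups):

    #!/bin/sh
    # starter_method sketch: confine the job command to $NSLOTS cores
    exec taskset -c 0-$(($NSLOTS - 1)) "$@"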
We have two different academic departments that have bought licenses for
the same software (COMSOL); each has its own license server serving
licenses for its own users. We currently use the Olesen FlexGrid
(https://github.com/olesenm/flex-grid) to integrate grid engine with
various license manag
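One way to keep the two pools separate is a consumable complex per department, with flex-grid (or a load sensor) reporting each server's availability; the names here are illustrative:

    # added via qconf -mc (name shortcut type relop requestable consumable default urgency)
    comsol_deptA  cda  INT  <=  YES  YES  0  0
    comsol_deptB  cdb  INT  <=  YES  YES  0  0
    # users then request the pool they are entitled to:
    qsub -l comsol_deptA=1 run_comsol.sh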
On Tue, 2019-05-14 at 10:03 -0400, Feng Zhang wrote:
> looks like your job used a lot of ram:
>
> mem 7.463TBs
> io 70.435GB
> iow 0.000s
> maxvmem 532.004MB
Not really. 532 MB isn't a lot of memory these days. The mem figure is
in terabyte-seconds, which accumulates memory use over the job's runtime.
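As a worked example of that unit (the runtime is assumed for illustration): a job holding a steady 532 MB (about 0.52 GB) for roughly 14,700 s accumulates 0.52 GB × 14,700 s ≈ 7,640 GB-seconds, i.e. about 7.46 TB-seconds, consistent with the figures above.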
On Wed, Jan 30, 2019 at 02:31:01PM +0000, Kandalaft, Iyad (AAFC/AAC) wrote:
> Maybe I'm mistaken but I believe you need to leave at least 1 entry from the
> previous file to retain the last job number. Otherwise, it restarts from 1.
I suspect you are. The jobseqnum file serves the purpose of tr
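For reference, the counter can be re-seeded by hand; the path assumes classic spooling under $SGE_ROOT/$SGE_CELL:

    # with sge_qmaster stopped:
    echo 100000 > "$SGE_ROOT/$SGE_CELL/spool/qmaster/jobseqnum"
    # restart sge_qmaster; job numbering resumes from this value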
On Wed, Jan 09, 2019 at 01:25:12PM +0000, Kandalaft, Iyad (AAFC/AAC) wrote:
> Thanks for the suggestion Reuti. Since we don't implement consumable
> resources for cores/CPUs, we rely on slots to limit the number of jobs per
> node. Can you recommend an accepted method or best practice where we
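A common slots-only way to cap concurrent jobs per node (host name and count assumed):

    # limit host node01 to 16 slots across all queues on it
    qconf -aattr exechost complex_values slots=16 node01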
On Wed, Dec 05, 2018 at 03:29:23PM -0300, Dimar Jaime González Soto wrote:
>the app site is https://omabrowser.org/standalone/ and I tried to make a
>parallel environment, but it didn't work.
The website indicates that an array job should work for this.
Has the load average spiked to the point
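A skeleton of that array-job approach (range and wrapper script assumed; each task handles one slice of the input):

    qsub -t 1-50 -cwd run_oma_task.sh
    # inside run_oma_task.sh, $SGE_TASK_ID selects that task's chunk of work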
On Tue, Nov 13, 2018 at 05:06:51PM -0700, ad...@genome.arizona.edu wrote:
> We have a cluster with gridengine 6.5u2 and are noticing strange behavior
> when running MPI jobs. Our application will finish, yet the processes
> continue to run and use up the CPU. We did configure a parallel environment
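Leftover ranks like that usually point at a loosely integrated PE; with tight integration, sge_execd can clean up slave processes itself. The relevant PE attributes, with the PE name assumed:

    # qconf -sp mpi   (relevant lines only)
    pe_name            mpi
    control_slaves     TRUE    # slaves start via qrsh -inherit, so execd can kill them
    job_is_first_task  FALSE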