Mike Renfro wrote:
> Prakash Velayutham wrote:
>> when a user requests that software, he should be given both CPUs in
>> the node regardless of what he requests, as the software takes up a
>> lot of RAM (almost 6-7 GB). How can I make this condition work?
>
> Two things should handle it:
>
> 1. In TORQUE, set a default amount of memory requested for a normal
> job. It doesn't have to be realistic, but it can't be tiny, either.
> 1-2 GB, perhaps? (A qmgr sketch follows below.)
>
> 2. In the memory-hog jobs, enter '#PBS -l mem=7168mb' or similar into
> the job files, replacing 7168mb with slightly less memory than
> 'pbsnodes' reports free when the node is idle. (An example job
> script follows below.)
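>
> For instance, a minimal sketch of step 1, assuming the default queue
> is named "batch" (adjust the queue name and value for your site):
>
>     qmgr -c "set queue batch resources_default.mem = 1gb"
>
> With that in place, any job that does not request memory explicitly
> is treated as needing 1 GB.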
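>
> And a sketch of a memory-hog job script for step 2 (the executable
> name is only a placeholder):
>
>     #!/bin/sh
>     #PBS -l nodes=1:ppn=1
>     #PBS -l mem=7168mb
>     cd $PBS_O_WORKDIR
>     ./big_memory_app
>
> Submit it with qsub as usual; the large mem request is what keeps
> other jobs off the node.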
>
> Combining those two should allow two normal jobs to run without
> problems, even if they grow substantially larger than 1 GB each. But
> your memory-hog jobs will request nearly all the memory on the
> system, and jobs that default to requesting 1 GB of memory will get
> pushed to other systems.
>
> Granted, if someone had a tiny memory job, they could also explicitly
> request 50 MB or whatever it might require and sneak onto the hog's
> system. But if they're estimating their needs accurately, and your
> hog job is really only using one CPU, there's no harm there.
Thanks, Mike. Any idea about Maui's abilities with GRES and software
license accounting, etc.?

Thanks,
Prakash