I changed the job limit in config_aux.xml to have the <per_proc/> option set.

I'm not entirely surprised it's broken.  There are two incompatible places
to set these limits: the old max_wus_in_progress_gpu option in config.xml,
and the newer max_jobs_in_progress block in config_aux.xml.
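
For reference, a sketch of the two mechanisms as I understand them from the
server documentation (the numeric limits are illustrative, not any
project's actual settings).  The old style in config.xml caps jobs
globally, not per GPU type or per device:

    <max_wus_in_progress>100</max_wus_in_progress>
    <max_wus_in_progress_gpu>100</max_wus_in_progress_gpu>

The newer style in config_aux.xml, where <per_proc/> multiplies the limit
by the number of processors of that kind:

    <max_jobs_in_progress>
        <project>
            <cpu_limit>
                <jobs>100</jobs>
            </cpu_limit>
            <gpu_limit>
                <jobs>100</jobs>
                <per_proc/>   <!-- i.e. 100 jobs per GPU, not 100 total -->
            </gpu_limit>
        </project>
    </max_jobs_in_progress>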


On Mon, May 12, 2014 at 1:28 PM, Stephen Maclagan <
[email protected]> wrote:

> Has this changeset just been applied to Setiathome, or have they changed
> their limits some other way?
>
> Up to today there was a 100 CPU and 100 GPU task limit in place; today,
> after maintenance, there seems to be a 100 CPU and 100-per-GPU task limit
> in place.
> I.e. with one CPU and one GPU you could previously have a maximum of 100
> CPU and 100 GPU tasks; now you get 100 GPU tasks per GPU.
> But my i5-3210M/GT650/Intel_Graphics_HD4000, instead of getting 100 Intel
> GPU tasks, got an extra 100 Nvidia GPU tasks.
> So it is possible this changeset is broken (if Setiathome has applied it).
>
> http://setiathome.berkeley.edu/forum_thread.php?id=74756
>
>
> http://setiathome.berkeley.edu/results.php?hostid=7054027&offset=0&show_names=0&state=0&appid=0
>
> Claggy
>
> > Date: Sat, 8 Mar 2014 11:22:17 -0800
> > From: [email protected]
> > To: [email protected]
> > Subject: Re: [boinc_dev] Scheduler Enhancement Request
> >
> > I checked in changes to the scheduler so that job limits are enforced
> > per GPU type.
> > I.e. if max_wus_in_progress_gpu is 10,
> > a host can have up to 10 NVIDIA jobs and 10 AMD jobs in progress.
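> >
> > A minimal sketch of the intended bookkeeping (hypothetical names, not
> > the actual scheduler code): count in-progress jobs separately per GPU
> > type, and check each type's count against the limit.
> >
> >     enum GPU_TYPE { GPU_NVIDIA, GPU_AMD, GPU_INTEL, NUM_GPU_TYPES };
> >
> >     // in-progress GPU job counts for one host, one slot per vendor
> >     int jobs_in_progress[NUM_GPU_TYPES] = {0};
> >
> >     // can this host take another job for GPU type t?
> >     bool can_send_gpu_job(GPU_TYPE t, int max_wus_in_progress_gpu) {
> >         return jobs_in_progress[t] < max_wus_in_progress_gpu;
> >     }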
> >
> > I didn't test this thoroughly.
> > Please let me know if you find any problems with it.
> >
> > -- David
> >
> > On 07-Mar-2014 8:23 AM, Jon Sonntag wrote:
> > > At present, if a host has GPUs from multiple vendors installed, it will
> > > request work from both vendors.  If the client's cache is set larger
> > > than twice the max_wus_in_progress_gpu setting, the user will only get
> > > tasks for one GPU.  The other will remain idle.
> > >
> > > In the example below, the server is set to use a max_wus_in_progress_gpu
> > > of 60.  The host has 120 WUs downloaded already, but all 120 are for
> > > CUDA.  So, even though the AMD GPU is idle, it won't download any work.
> > >
> > > Collatz Conjecture | 3/7/2014 10:22:43 AM | Sending scheduler request: Requested by user.
> > > Collatz Conjecture | 3/7/2014 10:22:43 AM | Requesting new tasks for NVIDIA and ATI
> > > Collatz Conjecture | 3/7/2014 10:22:43 AM | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
> > > Collatz Conjecture | 3/7/2014 10:22:43 AM | [sched_op] NVIDIA work request: 14315.74 seconds; 0.00 devices
> > > Collatz Conjecture | 3/7/2014 10:22:43 AM | [sched_op] ATI work request: 86400.00 seconds; 1.00 devices
> > > Collatz Conjecture | 3/7/2014 10:22:47 AM | Scheduler request completed: got 0 new tasks
> > >
> > > As a temporary "fix" I've increased max_wus_in_progress_gpu and also
> > > told the user to reduce his cache.  A permanent fix would be to
> > > enforce max_wus_in_progress_gpu by GPU type, so that if a user has two
> > > Nvidia GPUs and one AMD GPU in a host, he would get 66% CUDA and 33%
> > > AMD WUs and no device would be left idle.
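> > >
> > > (A worked example of that split, with my own numbers: if the 60-job
> > > limit were applied per device, this host could hold 2 * 60 = 120 CUDA
> > > jobs and 1 * 60 = 60 AMD jobs, i.e. 120/180, about 66% CUDA, and
> > > 60/180, about 33% AMD, and neither GPU would starve.)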
>
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
