Definitely R.request_secs/R.ninstances IMO; otherwise whichever resource has
the most instances gets all the work whenever the requests are for
considerably more than can be assigned in a single request.
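
To make the idea concrete, something like this (just a sketch with made-up
field names, not the actual sched_version.cpp structures):

    #include <vector>

    // Hypothetical per-resource request bookkeeping; the real scheduler
    // keeps this state differently.
    struct Resource {
        double request_secs;   // seconds of work the host asked for
        int    ninstances;     // number of instances (cores, GPUs)
        bool   usable;         // does job J have an app version using it?
    };

    // Pick the resource whose *per-instance* request is greatest, so a
    // 4-core CPU asking for 120000s doesn't automatically beat a single
    // GPU asking for 40000s.
    int pick_resource(const std::vector<Resource>& rs) {
        int best = -1;
        double best_norm = -1;
        for (int i = 0; i < (int)rs.size(); i++) {
            if (!rs[i].usable || rs[i].ninstances == 0) continue;
            double norm = rs[i].request_secs / rs[i].ninstances;
            if (norm > best_norm) {
                best_norm = norm;
                best = i;
            }
        }
        return best;   // -1 if no usable resource
    }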
I also appreciated Nicolás' example.
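His numbers check out; here's a toy makespan calculation of the two policies
(same assumptions as his post: 4 identical tasks, one device per resource,
GPU 3x the CPU speed):

    #include <algorithm>
    #include <cstdio>

    // Toy makespan check for Nicolas' example: CPU takes 1.0h per task,
    // GPU takes 1/3h (20 min) per task, tasks run sequentially per device.
    double makespan(int gpu_tasks, int cpu_tasks) {
        double gpu_time = gpu_tasks * (1.0 / 3.0);  // hours
        double cpu_time = cpu_tasks * 1.0;          // hours
        return std::max(gpu_time, cpu_time);
    }

    int main() {
        printf("all 4 to GPU:       %.2f h\n", makespan(4, 0)); // 1.33 h
        printf("3 to GPU, 1 to CPU: %.2f h\n", makespan(3, 1)); // 1.00 h
    }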
--
Joe
On Fri, 23 Mar 2012 18:04:31 -0400, David Anderson <[email protected]> wrote:
> That's a good example.
> A good policy might be:
>
> given a job J, let
> R = resource for which R.request_secs is greatest
> (or should it be R.request_secs/R.ninstances?)
> and for which J has an app version using R.
> Use the fastest such app version
>
> However, I'm not going to implement this right away.
> This would be a major change to the version-selection code
> (sched_version.cpp), which is already quite complex.
>
> -- David
>
>
> On 21-Mar-2012 9:55 PM, Nicolás Alvarez wrote:
>> On 22/03/2012, at 01:29, David Anderson <[email protected]> wrote:
>>> Josef:
>>> I don't understand the reasoning here.
>>> If there aren't enough jobs to satisfy both the CPU and GPU requests,
>>> isn't it better to send jobs only for GPU?
>>
>> Not if that decision makes the CPU go idle. Suppose my GPU is 3x faster
>> than my CPU, and I ask for two hours of work for each resource, which in
>> a certain project means 2 CPU tasks (taking an hour each) and 6 GPU
>> tasks (taking 20 mins each).
>>
>> If the project only has 4 tasks to give out, sending all 4 to the GPU
>> would keep the GPU busy for 1:20 and leave the CPU idle, and all tasks
>> would be done after 1:20. If instead the server sends 3 tasks to the GPU
>> and 1 to the CPU, the GPU and CPU would get 1 hour of work each, and all
>> four tasks would finish in 1:00 instead of 1:20.
>>
>> I don't fully understand Josef's scenario, but it seems to involve an
>> existing queue, which certainly makes things more complex. My simple
>> example above assumes we're starting from an empty queue (and of course
>> infinite download speed, etc.). I also think it's getting into
>> knapsack-problem territory...
>>
>>> Is there a scenario where the current policy results in
>>> less throughput (i.e. less credit) than some other policy?
>>>
>>> -- David
>>>
>>> On 21-Mar-2012 9:09 PM, Josef W. Segur wrote:
>>>> The current method of choosing which app version is "best" for a given
>>>> task on a host is based on the highest projected flops. It seems that
>>>> the quickest projected turnaround would be better for the project and
>>>> make more sense to users. If the work request is for a single resource
>>>> the choice does not differ, of course, but for two or more resources
>>>> it may.
>>>>
>>>> Suppose a host asks for 120000 seconds of CPU work and 20000 seconds
>>>> of NVIDIA GPU work. If the host has a quad-core CPU and a single GPU,
>>>> that's roughly 30000 seconds for each CPU. IOW, the host is saying
>>>> that it would likely start a CPU task 10000 seconds before a GPU task.
>>>> That's only part of the turnaround time, but perhaps enough to base
>>>> the choice on. That is, the first task would go to the CPU and its
>>>> estimated time would be subtracted from the CPU time request as
>>>> always. Then for the next task the balance might have shifted, so the
>>>> GPU gets that one.
>>>>
>>>> Particularly when there's some limitation on the number of tasks
>>>> available, this method would tend to keep all the host's resources
>>>> useful to the project. With the current algorithm I've seen many
>>>> complaints on SETI@home forums about getting GPU work when the CPUs
>>>> are about to run dry (or actually have), and some for the opposite
>>>> condition. Basically, when the requests for both types are more than
>>>> what's currently available, all assigned tasks go to one resource,
>>>> and the same resource is chosen on subsequent requests until the
>>>> requested amount of time for that resource is reached.
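
For what it's worth, Josef's per-task balancing comes out to a greedy loop
like the one below: assign each task to the "hungriest" resource per
instance, then charge the task's estimated time against that request. All
names and numbers here are made up for illustration; the real scheduler
code is far more involved.

    #include <cstdio>

    // Hypothetical per-resource request state, using Josef's numbers:
    // a quad-core CPU asking for 120000s and a single GPU asking for
    // 20000s, with rough per-task runtime estimates.
    struct Req {
        const char* name;
        double remaining_secs;   // unfilled request for this resource
        int    ninstances;
        double est_task_secs;    // estimated runtime of one task on it
    };

    int main() {
        Req reqs[] = {
            { "CPU", 120000, 4, 30000 },
            { "GPU",  20000, 1, 10000 },
        };
        int ntasks = 6;   // tasks the project has available right now

        for (int t = 0; t < ntasks; t++) {
            // pick the resource with the largest remaining request
            // per instance
            Req* best = nullptr;
            for (Req& r : reqs) {
                if (r.remaining_secs <= 0) continue;
                if (!best || r.remaining_secs / r.ninstances
                           > best->remaining_secs / best->ninstances) {
                    best = &r;
                }
            }
            if (!best) break;   // both requests satisfied
            printf("task %d -> %s\n", t, best->name);
            best->remaining_secs -= best->est_task_secs;
        }
    }

With these numbers the assignments alternate (CPU, CPU, GPU, CPU, GPU,
CPU), which matches Josef's description of the first task going to the CPU
and the balance then shifting toward the GPU.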
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.