This appears responsible for a problem I've seen on my laptop, where there are 
currently no GPU workunits available from any BOINC project doing the medical 
research I'm most interested in, and overheating problems force me to choose 
between these two options:
 
1.  Run the GPU normally, but use TThrottle to limit temperatures.  This slows 
down the CPU enough that very few CPU workunits finish by their deadlines, even 
if I tell BOINC to use only one of the two CPU cores.  As a result, I 
accomplish almost nothing for the medical research projects I'm most interested 
in, and only do much useful work for second-choice projects.
 
2.  Set the delay between the last use of my keyboard and the time BOINC is 
allowed to use the GPU to a very high value, such as 3600 minutes.  Then, if I 
download at least one GPU workunit from some BOINC project, for a few days I 
can run both CPU cores at the laptop's usual 60% of maximum speed without 
overheating, and actually finish workunits on time for medical research 
projects.  After those few days, I have to decide whether I'm ever going to let 
that GPU workunit run at all.
 
Seen under BOINC 6.10.58 and 6.12.33; I haven't tried 6.13.* or any versions 
between 6.10.58 and 6.12.33 on that laptop.
 
This needs a new work fetch policy that allows requests for more CPU workunits 
whenever the delay before the next request for GPU workunits is long enough for 
those CPU workunits to complete on time.
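To make the idea concrete, here is a minimal sketch of such a condition. The function and parameter names are hypothetical, not actual BOINC identifiers, and the estimates they stand for would have to come from the client's existing runtime bookkeeping:

```cpp
// Hypothetical sketch of the proposed policy (not actual BOINC code):
// allow a CPU-only work request when the delay before the next GPU
// work request is long enough for the fetched CPU workunits to
// finish by their deadlines.
//
// secs_until_gpu_request: seconds until BOINC next asks for GPU work
// est_cpu_runtime:        estimated CPU workunit runtime at the
//                         throttled clock speed, in seconds
// deadline_margin:        safety margin before the deadline, in seconds
bool allow_cpu_fetch(double secs_until_gpu_request,
                     double est_cpu_runtime,
                     double deadline_margin) {
    return secs_until_gpu_request >= est_cpu_runtime + deadline_margin;
}
```

Under this sketch, the 3600-minute idle delay in option 2 (216000 seconds) would permit CPU fetches for any workunit whose estimated runtime plus margin fits inside it, while a short delay would block them.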
 

> Date: Tue, 2 Aug 2011 15:52:34 +0200
> From: Jorden van der Elst <[email protected]>
> Subject: Re: [boinc_dev] [boinc_alpha] BOINC 6.13.1 released for all
>     platforms
> To: [email protected]
> Cc: [email protected]

> OK,

> So my way of thinking should then be adjusted?
> A one day additional work request is no longer a cache, no longer a
> work buffer?

> This still doesn't explain then why when I ask for CPU work only, I DO
> NOT get any work, while when I ask for CPU work with a GPU work
> request piggy-backing itself onto the work request, I DO get work for
> the CPU.

> It also doesn't explain why, when I do CPU only work requests, I only
> have the 4 tasks in cache that are running on the CPU cores. That as
> soon as one core is about to run dry, that a work request is done to
> another project, and 4 tasks of that project are gotten, regardless of
> their run time (12 hours or 43 minutes).

> It also doesn't explain then why, when I allow CPU work requests with
> a GPU work request piggy-backing onto the CPU, I do get lots of work
> from at least 4 projects, enough to fill a one day cache. That as soon
> as one core is about to run dry, that a further project is asked for
> work and work is gotten in. That then at all times I have 4 tasks per
> project in cache, while only 1 task per project is running.

> All work in cache is only CPU work. I have NO GPU work. I have GPU
> work requests, but none of the projects I run use ATI GPUs.

> I'm just trying to understand what I am seeing.

> On Tue, Aug 2, 2011 at 3:34 PM,  <[email protected]> wrote:
>> The new work fetch policy is to wait until the work buffer is at or below
>> the connect interval before requesting work.  In your case, that number is
>> 0 seconds.  It also does not work very well for that as there is time for
>> the request and download that has to be taken into account.
>>
>> A minor modification to the algorithm for determining when to trigger a
>> work fetch might be in order.  I would propose that the trigger point be
>> changed to max(connect interval, connect interval + extra work / 2).  This
>> should work OK both for people that have an always-on connection and for
>> those that have an intermittent connection.
>>
>> jm7
[snip]
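For readers following along, jm7's proposed trigger point can be sketched as follows. The names are illustrative rather than actual BOINC identifiers; all quantities are in seconds of buffered work:

```cpp
#include <algorithm>

// Sketch of the trigger point quoted above: fetch work when the
// buffered work falls to or below
//   max(connect interval, connect interval + extra work / 2).
double fetch_trigger(double connect_interval, double extra_work) {
    return std::max(connect_interval,
                    connect_interval + extra_work / 2.0);
}

// Work fetch fires once the buffer drops to or below the trigger.
bool should_fetch(double buffered_work,
                  double connect_interval, double extra_work) {
    return buffered_work <= fetch_trigger(connect_interval, extra_work);
}
```

Note that whenever the extra-work setting is non-negative, the second argument of the max() dominates, so the proposal effectively raises the trigger from the bare connect interval to the connect interval plus half the extra-work buffer.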
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.