Heard from Richard that a fix has now been found for this problem. Thanks for tracking it down:
>> The cause has been found. In your client_state.xml file, you'll find a
>> section like this:
>>
>> [code]<time_stats>
>>   <on_frac>0.998909</on_frac>
>>   <connected_frac>1.000000</connected_frac>
>>   <active_frac>0.999975</active_frac>
>>   <gpu_active_frac>0.174423</gpu_active_frac>
>>   <last_update>1288914349.413624</last_update>
>> </time_stats>[/code]
>>
>> If you change that <gpu_active_frac> to 1.000000, things will return to
>> normal. That should be quicker than reverting to v6.10.58.

Thanks/Ed

On Mon, Nov 1, 2010 at 3:30 PM, David Anderson <da...@ssl.berkeley.edu> wrote:

> Please set <work_fetch_debug> and send a few hours of event log.
> -- David
>
> On 01-Nov-2010 11:50 AM, Ed A wrote:
>
>> While 6.12.4 is a big improvement over 6.11.x and 6.12.2, there is still a
>> large nagging problem that surfaced in 6.11.7 and is still unfixed. In
>> order to maintain a consistent queue in GPU projects, the "Additional work
>> buffer" has to be continuously increased. The only way I've found to reset
>> this is to reinstall 6.11.6 or earlier and then install 6.12.4 again. As an
>> example, on one machine my 12-hour Collatz queue is up to an "Additional
>> work buffer" of 5.77 days and increasing at a rate of ~0.5 days/calendar
>> day in order to maintain a constant level. Other GPU project queues act
>> similarly, but not necessarily at as fast a rate (may be related to queue
>> size). CPU projects seem to be unaffected by this queue shrinkage.
>>
>> BTW, the GPU scheduling changes in 6.12.4 (non-FIFO) are a welcome
>> improvement.
>>
>> Thanks/Ed
>>
>> On Mon, Nov 1, 2010 at 11:03 AM, David Anderson <da...@ssl.berkeley.edu
>> <mailto:da...@ssl.berkeley.edu>> wrote:
>>
>>     I want to get 6.12 out the door as soon as we're all happy w/ notices.
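For anyone who wants to script the fix rather than hand-edit, here is a minimal sketch of the edit described above. It assumes the BOINC client is stopped before the file is touched, and it operates on an inline sample mirroring the quoted <time_stats> section rather than a real client_state.xml (whose path varies by platform); adapt it to your own data directory.

```python
# Sketch: set <gpu_active_frac> back to 1.000000, as described above.
# Stop the BOINC client before editing the real client_state.xml.
import xml.etree.ElementTree as ET

# Sample fragment mirroring the quoted section (a real file has more content).
sample = """<client_state>
  <time_stats>
    <on_frac>0.998909</on_frac>
    <connected_frac>1.000000</connected_frac>
    <active_frac>0.999975</active_frac>
    <gpu_active_frac>0.174423</gpu_active_frac>
    <last_update>1288914349.413624</last_update>
  </time_stats>
</client_state>"""

root = ET.fromstring(sample)
frac = root.find("./time_stats/gpu_active_frac")
frac.text = "1.000000"          # the one-line fix from the thread
patched = ET.tostring(root, encoding="unicode")
print(patched)
```

For a real file you would parse with `ET.parse(path)`, apply the same change, and write back with `tree.write(path)`; back the file up first, since the client rewrites it on exit.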
>>
>>     -- David

_______________________________________________
boinc_dev mailing list
boinc_dev@ssl.berkeley.edu
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and (near bottom of page) enter your email address.