What are your queue settings for both "Maintain enough tasks for" and "up to an additional"?
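(For reference: those two fields are the client's work-buffer preferences, which as far as I know correspond to <work_buf_min_days> and <work_buf_additional_days>. They can also be set locally via global_prefs_override.xml in the BOINC data directory; a minimal sketch with example values follows - check the element names against your own client version.)

    <!-- global_prefs_override.xml (local override of the web preferences).
         Assumed mapping: "Maintain enough tasks for" = work_buf_min_days,
         "up to an additional" = work_buf_additional_days, both in days. -->
    <global_preferences>
        <work_buf_min_days>0.5</work_buf_min_days>
        <work_buf_additional_days>0.25</work_buf_additional_days>
    </global_preferences>

With values like these the client should never aim to hold much more than about 0.75 days of estimated work, which is the yardstick against which the downloads described below look excessive.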
-----Original Message-----
From: boinc_dev [mailto:[email protected]] On Behalf Of Charles Elliott
Sent: Wednesday, February 19, 2014 8:34 PM
To: 'Richard Haselgrove'; 'Nicolás Alvarez'; 'William Stilte'
Cc: 'BOINC Developers Mailing List'
Subject: Re: [boinc_dev] new GUI RPC for getting completed /reported tasks

"'pushing' an unreasonable volume of work" does not even begin to describe the situation with Einstein@Home. SETI@Home was down on a Thursday a few weeks ago, when Tuesday is the usual maintenance day. At 5:00 PM EST it did not look like they were going to get back up, and I was out of work units for the GPUs. So I set the default resource share for E@H to 1E-5 and no CPU work units. Unfortunately, and this is my mistake, the "Home" resource share was still at about 5% with both CPU and GPU WUs allowed, and my computers were set to the Home venue. On two computers I unsuspended E@H and allowed more work. Almost instantaneously, I had been allocated over 400 work units on both computers, literally weeks of non-stop computation. That is excessive and unreasonable by any standard.

Then, of course, by about 11:00 PM EST S@H was up and running. Since E@H's work units have such short deadlines -- everything was running at high priority (no alternation between projects possible) -- E@H work preempted S@H work, so I let it run all night and then suspended E@H. I did complete a few E@H WUs on Tuesdays when S@H is down.

BTW, E@H's work units downloaded in about half an hour, but the application files had single-digit transfer rates, so it was at least an hour before my computers could begin any E@H work. It would be nice to download a few GPU WUs from E@H on Tuesdays when S@H is down, but with the way the server works now it just is not possible. Also, it would be nice to execute more than one WU per GPU.

Charles Elliott

-----Original Message-----
From: boinc_dev [mailto:[email protected]] On Behalf Of Richard Haselgrove
Sent: Wednesday, February 19, 2014 10:51 AM
To: Nicolás Alvarez; William Stilte
Cc: BOINC Developers Mailing List
Subject: Re: [boinc_dev] new GUI RPC for getting completed /reported tasks

I've just been reminded of another reason why this would be helpful. Not every project's scheduler/feeder allocates tasks in monotonic order of ResultID. Consider http://einstein.phys.uwm.edu/results.php?hostid=4124638

From page 1 alone, I can see that tasks were allocated at 7:19:40, 7:20:54, 7:22:00, 7:23:05 and 7:24:12, and the pattern continues on subsequent pages: having the column sortable would help to identify the pattern.

Not unnaturally, the excessive work fetch prompted a report by the user on the project's message board: http://einstein.phys.uwm.edu/forum_thread.php?id=10566 - in this case an experienced user has posted a calm and reasonable question, but in other similar cases there have been accusatory posts blaming the project for "pushing" an unreasonable volume of work. Unfairly, IMHO.

My preliminary assessment is that this is another manifestation of the problem I reported on the boinc_alpha mailing list in May last year, under the title "v7.0.64 - repeated and excessive work fetch": it is the client which initiates the work fetch, again and again, despite the volume of work downloaded being substantially in excess of the maximum cache requested.
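(For readers following the cache discussion, a simplified illustration - not the actual BOINC client code - of the buffer check Richard describes: the client is only supposed to request the shortfall between the configured cache (the two settings asked about at the top of the thread) and the work it already has queued. The day-to-seconds conversion and the per-device scaling below are assumptions for illustration only.)

    # Simplified sketch (not the real client code) of the intended work-buffer
    # check: request only the shortfall between the configured cache and the
    # estimated work already queued.
    def work_request_seconds(buf_min_days, buf_additional_days,
                             queued_seconds, ndevices=1):
        """Seconds of work to ask the scheduler for; 0.0 if the cache is full."""
        target = (buf_min_days + buf_additional_days) * 86400 * ndevices
        return max(0.0, target - queued_seconds)

    # A 0.5 + 0.25 day cache on 4 devices with 3 days of work already queued
    # should produce no further request:
    print(work_request_seconds(0.5, 0.25, 3 * 86400, ndevices=4))  # -> 0.0

The behaviour in the bug reports is the opposite: requests keep being issued even when this shortfall is zero or negative.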
We had a lengthy discussion about this bug, and I supplied logs from when I observed the behaviour on my own machine, but I don't recall any changeset declaring the problem fixed (though Jacob Klein did send me a copy of a private reply he'd received from David, suggesting that GPU exclusion code was being used when it shouldn't have been). Unfortunately, the present reporter is using an old version of BOINC (v6.10.60), so his experience doesn't answer the question about a fix - but the column sort on the result pages (to get back on topic) would help with diagnosis.

>________________________________
> From: Nicolás Alvarez <[email protected]>
>To: William Stilte <[email protected]>
>Cc: BOINC Developers Mailing List <[email protected]>
>Sent: Sunday, 16 February 2014, 22:37
>Subject: Re: [boinc_dev] new GUI RPC for getting completed /reported tasks
>
>If you want to sort by time then sort by time (once that is made possible), not by ID.
>
>In my opinion the numeric IDs shouldn't even be exposed on the website (of course it's too late for that now).
>
>2014-02-14 8:25 GMT-03:00 William Stilte <[email protected]>:
>> For the sake of completeness, I wrote the following back in September:
>>
>> While sorting by task name is useful at times, if you're trying to find a particular task, in general you'd want to see the tasks in order of task ID - which amounts to ordering by send time/date.
>> Would it be possible to get custom column sorting instead (e.g. by clicking on the column header)? That would also make it possible to sort by device, i.e. find CPU/GPU tasks quickly, something that is sadly missing atm.
>> The search function is useful, but would profit from allowing wildcards.
>>
>> 2014-02-14 11:39 GMT+01:00 Richard Haselgrove <[email protected]>:
>>
>>> Having had a few months' experience with this, I wonder if we could fine-tune it, especially for SETI Beta.
>>>
>>> The search box is wonderful, and ideal for the purpose.
>>>
>>> But unconditionally sorting tasks by name when names are displayed can actually be a hindrance.
>>>
>>> SETI Beta deliberately keeps tasks in the database, unpurged, for months or sometimes years. At the same time, like any active Beta project, most attention is focused on recently-modified application versions.
>>>
>>> This morning, I was trying to assess whether a particular failure mode might be associated with a particular task type (those with ".vlar" in the name) - and I found I couldn't combine "show name" with "recent".
>>>
>>> The ideal solution would be to have sort links in the column headers, independent of display format - as, I believe, was suggested to you privately at the time.
>>>
>>> >________________________________
>>> > From: David Anderson <[email protected]>
>>> >To: Richard Haselgrove <[email protected]>
>>> >Cc: BOINC Developers Mailing List <[email protected]>
>>> >Sent: Monday, 9 September 2013, 3:22
>>> >Subject: Re: [boinc_dev] new GUI RPC for getting completed /reported tasks
>>> >
>>> >Good idea! I made the following changes to the web code:
>>> >
>>> >- if you click "show name" in the task list, tasks are sorted by name instead of ID
>>> >
>>> >- you can search for tasks by name.
>>> >
>>> >This is deployed on SETI@home and SETI@home beta.
>>> >
>>> >-- David
>>> >
>>> >On 08-Sep-2013 1:22 PM, Richard Haselgrove wrote:
>>> >> This looks very useful, both for application developers/testers, and when helping volunteers diagnose faults via a project message board.
>>> >>
>>> >> But either process is slowed down and made harder by a missing link in the chain.
>>> >>
>>> >> The local client knows a task only by its name, as in the list below. The project website default display (except at Einstein) shows Task IDs for results - and always displays tasks in order of ID #, even when displaying names.
>>> >>
>>> >> It would be most helpful to be able to associate names and Task IDs - either by passing the Task ID # to the client in the <result> block when allocating the task, or by adding a task name search facility to the task listing pages on project websites.
>>> >>
>>> >> --------------------------------------------------------------------------------
>>> >> From: David Anderson <[email protected]>
>>> >> To: BOINC Developers Mailing List <[email protected]>
>>> >> Sent: Sunday, 8 September 2013, 20:54
>>> >> Subject: [boinc_dev] new GUI RPC for getting completed /reported tasks
>>> >>
>>> >> Fred (developer of BoincTasks) pointed out that it's hard for GUIs to learn the outcome of completed tasks, since the interval from when the task completes to when it's reported can be just a few seconds, and after this the client forgets about the task.
>>> >>
>>> >> To address this problem, I did the following:
>>> >>
>>> >> - The client keeps an in-memory list of tasks that have been reported in the last hour. For each task it stores:
>>> >>   - the project URL
>>> >>   - the result name
>>> >>   - the app name
>>> >>   - the final elapsed time
>>> >>   - the exit status
>>> >>   - the time when the task completed
>>> >>   - the time when the task was reported
>>> >>
>>> >> - There's a new GUI RPC, get_old_tasks(), which fetches this list.
>>> >>
>>> >> This feature will be in the next client release.
>>> >>
>>> >> -- David
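(As an illustration of how an external GUI such as BoincTasks might consume the RPC David describes above, here is a minimal sketch that talks to the client's local GUI RPC socket on TCP port 31416, where requests are XML documents terminated by a 0x03 byte. The request tag <get_old_tasks/> is assumed from the function name in David's message, and the sketch assumes no GUI RPC password is configured; with a password, the auth1/auth2 handshake is required first. Check gui_rpc_client.h in the released client for the actual tag and reply format.)

    # Minimal sketch of polling the BOINC client for recently reported tasks.
    # Assumptions: request tag taken from the function name in David's message,
    # client running on localhost, no GUI RPC password configured.
    import socket

    def gui_rpc(request_xml, host="127.0.0.1", port=31416):
        """Send one GUI RPC request and return the raw XML reply."""
        with socket.create_connection((host, port)) as s:
            s.sendall(b"<boinc_gui_rpc_request>\n" + request_xml.encode()
                      + b"\n</boinc_gui_rpc_request>\n\x03")
            reply = b""
            while not reply.endswith(b"\x03"):
                chunk = s.recv(4096)
                if not chunk:
                    break
                reply += chunk
        return reply.rstrip(b"\x03").decode()

    # Tasks reported in the last hour: project URL, result name, app name,
    # final elapsed time, exit status, completion and report times.
    print(gui_rpc("<get_old_tasks/>"))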
