Well done Paul, I think you claim the prize:
http://boinc.berkeley.edu/trac/changeset/17860
- Original Message -
From: Paul D. Buck p.d.b...@comcast.net
To: David Anderson da...@ssl.berkeley.edu
Cc: john.mcl...@sybase.com; BOINC Developers Mailing List
boinc_dev@ssl.berkeley.edu
Sent:
I'll give a good test when the next build is out, but I won't be able to
reproduce the experimental conditions until Einstein is issuing work
again -
dry on all relevant boxes. So we get to relax a bit over the weekend.
v6.6.24 is installed and new work is downloading from Einstein as I
this been remarked on
before?]
I'll go and preserve the logs, and try and find you the meaningful sections
to upload. More follows.
- Original Message -
From: Richard Haselgrove r.haselgr...@btinternet.com
To: Richard Haselgrove r.haselgr...@btinternet.com; David Anderson
da
Paul D. Buck wrote
Again, the point being that we still have the issue where BOINC
continually changes its mind about what should be running.
One of the causes is that we call the scheduling routine so often. But
that is not all of it. Even if we do throttle the number of calls to
Mikus wrote
I've seen the following (presumably in chase of a missed deadline)
- I forget which release of the BOINC client I was using. [I run
off-line; my interval-between-connects is 1.1 days; my additional
work days are 1.x days (for a total ready-queue size of 2+ days).]
From one
Paul D. Buck wrote
Not sure if I have the for sure smoking gun ... but ...
VERY interesting evidence for you developers to ignore ...
TWO Problems, for the price of one night's work.
1) There seems to be an issue with the way BOINC interacts with the
zip and unzip which may cause BOINC
That sounds like the problem they had at CPDN Beta with big trickle files -
the sched_request size got too big. Not the number of tasks, but the size of
the file.
You might ask someone with the problem how big that file is - someone at
CPDN mentioned 4MB as a limit.
At 12:34 PM +0100 5/19/09, Richard Haselgrove wrote:
The BOINC v6.6 managers (currently .20 and .28) show much more screen
'flicker' when updating data for running tasks than earlier managers -
comparing the current unified task list view with earlier 'Grid' views in
v6.4.5, v6.2.49
While you lot have been chatting (grin), I have opened trac tickets #896
and #897
Does the reporting style (level of detail, sub-headings) seem appropriate?
If so, I'll hang on to the template.
- Original Message -
From: Kathryn Marks kathryn.bo...@gmail.com
To: Eric Myers
Since the circumstances seem to be easier to re-create in the current SETI
configuration (almost out of Astropulse work), would you like me to try for
more [rr-sim] logs - and any other flags you care to suggest?
Richard Haselgrove wrote:
17/04/2009 11:21:51 [wfd] target work buffer
1) Correct to close #869: tested and working (at the third attempt ;-) )
2) Can close #897 - fix in v6.6.31 seems effective. #890 already closed.
3) I don't mind the wontfix on #620 - I thought it was a clumsy solution -
but I think David's closure note misses the point.
The client keeps track
, but
offers of help
- Original Message -
From: David Anderson da...@ssl.berkeley.edu
To: Richard Haselgrove r.haselgr...@btinternet.com
Cc: BOINC Developers Mailing List boinc_dev@ssl.berkeley.edu
Sent: Wednesday, June 03, 2009 12:30 AM
Subject: Re: [boinc_alpha] Work fetch bug continues
David Anderson wrote
My inclination would be to simplify the user info box even further,
e.g. to remove the credit stuff,
and replace Joined: Aug 8 08 with Joined: 10 months ago or something
-- David
Please don't. People rather like celebrating their BOINC birthdays,
especially in this
David Anderson wrote
I'll do this if someone sends me an array
that parallels the existing $countries array (see below)
containing the 2-letter country codes used by the FamFamFam images
-- David
A few errors have crept into http://boinc.berkeley.edu/trac/changeset/18390
I used to do this
The only mechanism for changing it is to modify the duration correction
factor. The duration correction factor will correct itself over time
anyway, but it is possible to set it by hand to be closer to where it will
eventually stabilize.
jm7
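As a concrete illustration of jm7's point, here is a toy version of a duration correction factor update. The constants and the up/down asymmetry are assumptions for illustration, not the client's exact code:

```python
def update_dcf(dcf, estimated, actual, down_step=0.1):
    """Adjust the duration correction factor after a task finishes.

    Illustrative only: jumps up immediately when a task runs long
    (a deadline miss is expensive), but eases down by only a fraction
    of the difference when a task runs short.
    """
    ratio = actual / estimated
    if ratio > dcf:
        return ratio                             # correct upward at once
    return dcf + down_step * (ratio - dcf)       # decay downward slowly
```

Setting it by hand, as jm7 suggests, just shortcuts the many small downward steps this rule would otherwise take to converge.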
The problem with relying on DCF is that no
If you have multiple login accounts for the computer, I believe you have to
repeat step (3) (heeding the warning in step (4)) for each separate user
account.
- Original Message -
From: William bcdecbi...@yahoo.com
To: boinc_dev@ssl.berkeley.edu
Sent: Tuesday, July 14, 2009 8:08 PM
FYI
http://tech.slashdot.org/story/09/07/14/1743230/BOINC-Exceeds-2-Petaflops-Barrier
Cheers,
Reinhard.
Most project Server Status pages, such as
http://einstein.phys.uwm.edu/server_status.php, explain that they derive
their FLOP-equivalence from the sum of the Recent Average Credit (RAC)
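As a worked example of that derivation, assuming the canonical Cobblestone scale (a sustained 1 GFLOPS earning about 200 credits per day - the exact constant is an assumption here, so check each project's own status page):

```python
def rac_to_teraflops(total_rac, credits_per_gflops_day=200.0):
    """Convert a project-wide sum of RAC (credits/day) to TFLOPS,
    on the assumed Cobblestone scale."""
    gflops = total_rac / credits_per_gflops_day
    return gflops / 1000.0

# A project whose users' RAC sums to 1,000,000 would report 5 TFLOPS.
```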
That only works if you have the bandwidth to actually get the data to the
validator. The problem at the moment in SETI is the last mile of internet
connection. There are several possible solutions, but having the upload
servers elsewhere does not really help that much. The uploaded data
Advantages
Relatively simple server requirements for the 'data concentrator' - just
cgi, and some filesystem-level cron scripting
Much cheaper than $80,000 for 'fibre up the hill'
Quicker to implement than more esoteric suggestions - I don't think there's
anything above
I didn't pursue it last time, because I was using an old version of BOINC, and
saw no reason to upgrade at the time - this was 2008 BC (before CUDA). I did
ask if anyone could confirm my observation with a current BOINC, but answer
came there none.
I am, however, glad that I posted on a
Lynn W. Taylor wrote:
Some sort of daemon on the upload server would then send work back to
the main site as quickly as possible without saturating the link.
... but you're still limited to 80 or 90 megabytes.
It doesn't solve the problem (too many simultaneous connections); it just
moves
You do get some increased throughput if the problem is dropped connections
and packets, and the distributed upload servers have sufficiently better
connections, and the link to the final upload server has sufficient
bandwidth to handle the load if the connections are carefully controlled
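The "carefully controlled" connections could be sketched as a token-bucket limiter sitting in front of the forwarding daemon. Everything here (class name, rates) is illustrative; no such component exists in BOINC:

```python
class TokenBucket:
    """Cap the average forwarding bandwidth while allowing short bursts."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes   # start with a full bucket
        self.last = 0.0

    def allow(self, nbytes, now):
        """Return True if nbytes may be sent at time `now` (seconds)."""
        # Refill in proportion to elapsed time, up to the burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

The daemon would simply sleep and retry whenever `allow()` returns False, so the link to the main site never saturates.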
It's on #713 as well - not yet cleaned.
- Original Message -
From: Jorden van der Elst els...@gmail.com
To: Rom Walton r...@romwnet.org
Cc: BOINC dev boinc_dev@ssl.berkeley.edu
Sent: Friday, August 07, 2009 1:01 PM
Subject: Re: [boinc_dev] *SPAM* Re: Spam on Trak tickets
It's back,
When checking this report, please consider other OSs as well. There have
been repeated, but inconclusive, reports from Windows users with multiple
CUDA cards (such as Fred W) of similar behaviour. In Windows, it sounds as
if this 'tasks on all cards exit at the same time' is mainly triggered by
Sounds as if you disabled _attachments_ to tickets, thus preventing a useful
code patch being added to #139, without disabling new _comments_ on tickets,
which was the spam vector.
- Original Message -
From: David Anderson da...@ssl.berkeley.edu
To: BOINC Developers Mailing List
Lynn W. Taylor wrote
It seems to me that the big fear is the two-week timer: if a work unit
can't be uploaded in two weeks, it's going to be thrown away, causing
irreparable harm to the project and a tragic hit to the cruncher's RAC.
CPDN CM3-160 models can run for 4 months or longer. Losing
Travis Desell wrote:
Is there another variable to determine the time a workunit has spent
on the GPU? For whatever reason our CUDA applications are reporting a
very small cpu_time value, I'm wondering if this is some issue we have
in our application, or if the BOINC client is reporting it
It would be really helpful to be able to know the elapsed_time of a result,
mainly to see exactly how fast these workunits are crunching (and if anyone is
trying to fake results). Maybe add something like an elapsed_time or gpu_time
field to the RESULT struct. Or maybe just change cpu_time
This conversation confused me, until I looked at the actual format of a
sched_reply.xml file.
<project_preferences>
<resource_share>100</resource_share>
<project_specific>
<format_preset>s...@home classic</format_preset>
<text_style>Pillars</text_style>
<graph_style>Rectangles</graph_style>
Following on from my previous thought, it seems that we aren't paying enough
attention to the scope of some of these settings.
An XML entity has two different scopes:
a scope for the DEFINITION
a scope for the VALUE
Many BOINC preferences are global for both definition and value. They tend
to
to Milkyway and
dedicate a nVidia GPU to Collatz.
--- On Sat, 26/9/09, David Anderson da...@ssl.berkeley.edu wrote:
From: David Anderson da...@ssl.berkeley.edu
Subject: Re: [boinc_dev] Why did BOINC contact Prime Grid, and Why did it DL
executables
To: Richard Haselgrove r.haselgr...@btinternet.com
(the 5 minute variety that we have now)
jm7
That would be 32 seconds:
24-Sep-2009 01:05:11 [---] Running CPU benchmarks
24-Sep-2009 01:05:11 [---] Suspending computation - running CPU benchmarks
24-Sep-2009 01:05:11 [climateprediction.net] [cpu_sched] Preempting
Paul D. Buck wrote
The problem is that Eric's script will mean that over a 4 year life of
a computer the CS it earns in year four will take more processing time
to obtain. That is the flaw ... as the performance average increases
the award goes down. So the CS I earned when I started BOINC
Raistmer wrote
Why should BOINC trash the whole GPU cache if the GPU was not found at the
current BOINC start?
Why can't the tasks just be put into a waiting or suspended state (so they
don't participate in scheduling decisions) until the GPU is available again?
Although this code was introduced in response to a
Paul D. Buck wrote:
Wasn't me that said so ... it was you (and I quote):
There are also some fairly major drawbacks, and that is why I think
it won't be implemented as is. First, it's immediately deflationary.
The average host that connected to s...@home today earns 292 credits
per day,
and no GPU prefs for now).
-- David
Richard Haselgrove wrote:
Following on from my previous thought, it seems that we aren't paying
enough attention to the scope of some of these settings.
An XML entity has two different scopes:
a scope for the DEFINITION
a scope for the VALUE
Many BOINC
There was an issue with 6.6.17 not saving the proxy changes. This was
fixed in 6.6.18; however, there was only a Windows build of this the last
time I checked (yesterday).
Cheers,
MarkJ
Would those be v6.10.xx?
There's a v6.10.19 now (just downloaded, about to test), but it's still
Windows
This is a repeat of the request (1) in trac ticket #842.
Request (2) was implemented last month by changeset 19573.
Please re-evaluate request (1).
Hi,
I hope this is the right channel and apologise right now if it isn't
(please tell me ;)
I am running Boinc with GPU support enabled and
Thinking about this, I think it's a classic no-win situation: there is no
right answer.
Consider the simplest possible case:
Two projects attached - one has work, the other doesn't.
Under the old scheme, the project without work is pegged to 0 STD. That
means that the running project is
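A minimal sketch of the short-term-debt (STD) bookkeeping being discussed, under simplifying assumptions (the real client also normalizes and clamps the values; treat this purely as illustration):

```python
def update_short_term_debt(projects, dt):
    """One accounting step: projects with work earn debt in proportion
    to resource share and pay it off with the CPU time they actually
    used; a project with no work is pegged to zero STD, as described.

    `projects` maps name -> dict with keys share, cpu_used, has_work,
    debt; `dt` is the elapsed wall time for this step (seconds).
    """
    runnable = [p for p in projects.values() if p["has_work"]]
    total_share = sum(p["share"] for p in runnable) or 1.0
    for p in projects.values():
        if not p["has_work"]:
            p["debt"] = 0.0  # pegged to zero while it has no work
            continue
        p["debt"] += dt * p["share"] / total_share - p["cpu_used"]
        p["cpu_used"] = 0.0
```

With two equal-share projects, one dry, the running project both earns and spends the whole interval, so its debt never moves - which is the "no right answer" situation described above.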
: boinc_dev-boun...@ssl.berkeley.edu
[mailto:boinc_dev-boun...@ssl.berkeley.edu] On Behalf Of David Anderson
Sent: Thursday, December 10, 2009 7:14 PM
To: Richard Haselgrove
Cc: boinc_dev
Subject: Re: [boinc_dev] Request: Add Snooze/Suspend GPU's only
I added a Snooze GPU checkbox
All GPU apps use CPU too, so suspending CPU implies suspending GPU.
So snooze and snooze GPU is sufficient.
Literally true, but in the context of BOINC there's a big difference between
using 1% of 1 core to support GPUGrid, and 100% of 8 cores to run AQUA. I
think that users choosing to
A process started by BOINC will have the IDLE priority class, while threads
newly created by the app itself will get NORMAL priority - hence the
priority of 4 that Richard sees. No contradiction here, IMO.
The AQUA devs probably need to change the process priority, not the thread
priority, if they don't like what they have
Raistmer, you're reading far more into my comment than it deserves.
All I was saying is that if four cores finish quick, medium, medium, slow,
and everything has to wait for slow to finish, then there are some idle
cores for a minute or two. If they all finish medium, medium, medium,
medium,
of its processes.
We'll have to figure out a solution that works on Linux/Mac and Windows, and
doesn't cause problems with OpenMP. - by which point, I'm well out of my
depth.
Hopefully that will fix the problem;
I'd like to avoid dealing with variable # of threads.
-- David
Richard
priority
to be the same as this value.
Other developers of MT applications please copy: congratulations and thanks
to the AQUA devs for being so responsive.
Hopefully that will fix the problem;
I'd like to avoid dealing with variable # of threads.
-- David
Richard Haselgrove wrote
There's also a converse problem if a project supplies too-small job FLOP
counts, in that EDF may not be invoked soon enough: this particularly
applies if a long, under-estimated task follows a succession of shorter
and/or better estimated tasks.
We first saw this clearly with the introduction
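A toy deadline check shows the effect: EDF only engages when the *estimated* remaining time threatens the deadline, so an under-estimated task looks safe until the estimate is nearly exhausted. This is a deliberate simplification of the client's round-robin simulation, not its actual code:

```python
def edf_needed(est_remaining_s, deadline_s, now_s):
    """Toy check: does this task LOOK like it will miss its deadline?
    Uses the (possibly wrong) estimate, as the client must."""
    return now_s + est_remaining_s > deadline_s

# A task that really needs 10 h but is estimated at 2 h, due in 12 h,
# looks comfortably safe at t=0 - and keeps looking safe until nearly
# t=10 h, by which time the deadline is already effectively lost.
```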
Can you be more specific?
We've spent the last several years trying to make
the software more accessible/usable,
and adding social/community features.
We've made lots of progress in both areas.
What have we missed?
-- David
I think you've possibly created a lot of little communities,
A wonderful example of the multiple community problem has just presented
itself.
David, minor broken link on the BOINC website - but I've written my comment
in
http://boinc.berkeley.edu/dev/forum_thread.php?id=5358
to highlight this issue too.
- Original Message -
From: Richard
for a specific project. If there were only a single forum,
that would be likely to disappear.
jm7
Richard Haselgrove r.haselgr...@btinternet.com
To: David Anderson
Sent:
No amount of aggregating, nor any number of aggregation sites to browse,
will help if project admins won't play their part by posting news in a
timely and accessible fashion.
I was about to cite Orbit as a valuable project I would like to devote more
time to, but which seemed to have fallen
I know that email lists can be used for behind-the-scenes communications.
But they're far less efficient than a hidden forum area, if only because on
a forum you see all the posts in the right chronological order. I work in a
school where I do not know how we could function without the
In setting up my AQUA app_info today, I also found that I had to allow the
appropriate application in project preferences. This was unexpected new
behaviour for someone brought up on the SETI way of doing things, but I
think it's a good idea.
As to needing to allow CPU work for a GPU app -
This may be an over-simplistic suggestion - but why not keep track of how
long the *last* RPC took, and don't start another one until both the RPC
itself, and all necessary post-processing (sorting, display updating) have
completed?
Putting absolute values in would be a bad idea - there'll
-day tasks.
So max_wus_in_progress is another candidate for migration to
per-app-version, please.
- Original Message -
From: David Anderson da...@ssl.berkeley.edu
To: Richard Haselgrove r.haselgr...@btinternet.com
Cc: john.mcl...@sybase.com; boinc_dev@ssl.berkeley.edu
Sent: Wednesday
, the inhibit flag would revert to being controlled by user
preferences - and at the same time, the user preference control would become
available on the web preferences page.
- Original Message -
From: David Anderson da...@ssl.berkeley.edu
To: Richard Haselgrove r.haselgr...@btinternet.com
Cc
Moving DCF down from a project scope to an app_version scope is clearly
necessary, but I think there are many unanswered questions about the
server-based approach, and potential pitfalls.
Speed of change / settling time
At the moment, the standard change is 10% of difference, per task exit -
this caution and generating very low DCF values for a string of -9 exits.
BTW, this is not just SETI, there are other projects that can have tasks
that exit early.
jm7
Richard Haselgrove r.haselgr...@bti
The AQUA devs have asked me to speak on their behalf in the current max_wus
discussion (well, Kamran has, and Neil - who was copied in on the email -
hasn't demurred).
Kamran also asked me to raise another, different, issue in this general
area. Regular readers know that the estimated time To
El 22/03/2010, a las 17:39, Richard Haselgrove
r.haselgr...@btinternet.com escribió:
The AQUA devs have asked me to speak on their behalf in the current max_wus
discussion (well, Kamran has, and Neil - who was copied in on the email -
hasn't demurred).
Are project developers afraid
BTW, the reason for the caution on reducing the DCF on the client if it is
very high is the very real problem with a batch of SETI -9 exit results.
Get a few dozen of these in a row, and you will discover that too much work
is fetched at the next work fetch, unless caution is used.
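The overshoot is easy to quantify: the number of tasks fetched to fill a fixed queue scales inversely with DCF. A simplified model (the real request is expressed in seconds of work, not a task count):

```python
def tasks_to_fill_queue(queue_s, base_estimate_s, dcf):
    """How many tasks the client thinks it needs: each task is
    predicted to take base_estimate_s * dcf seconds."""
    return queue_s / (base_estimate_s * dcf)

# With a 2-day queue and nominal 1-hour tasks: at DCF 1.0 the client
# wants about 48 tasks, but a DCF crushed to 0.1 by a run of
# near-instant -9 exits inflates that to roughly 480.
```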
Yes, just tested - the /a administrative setup option is still present in
v6.10.54
But it still has the problems I described in
http://boinc.berkeley.edu/trac/ticket/847 - my INSTALLDIR and DATADIR have
both been set to
W:\downloads\BOINC setup files\v6-10-54_admin\program files\BOINC\
I'm sure we will get the usual reminders on this one: the limit was
introduced, again on a project-specific basis (though I don't know which
project it was - before my time), because a rogue application generated
uploadable data faster than it could be returned to the project.
I wonder when
Joe Segur made a relevant observation on the SETI message board.
When the 'whole system' conveyor belt of jobs is interrupted, all components
come to a halt. New work can't enter the system until old work is fully
processed and the disk storage space recycled.
Judging by Scarecrow's graphs, it
I've never noticed point (2) to be a problem. Successful returns increase the
quota for the *current* day, not for subsequent days: so, in the worst case
scenario (unless the doubling algorithm has been changed with the new server
code):
The user only notices the problem after quota has already
Allowing a 'bonus' on quota for a validated task gets round the astronomical
numbers that can be processed by successful, but idiotic, reports such as
SETI overflows on faulty GPUs.
Such results will be returned, but NOT validated. So they don't receive the
validation bonus.
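The distinction can be sketched with assumed constants (the real server logic differs; this only shows why rewarding validation, rather than mere return, closes the loophole):

```python
class DailyQuota:
    """Toy per-host daily task quota: errors shrink it, and only a
    VALIDATED result - not a merely returned one - grows it back."""

    def __init__(self, start=100, floor=1, ceiling=100):
        self.quota = start
        self.floor = floor
        self.ceiling = ceiling

    def on_error(self):
        self.quota = max(self.floor, self.quota // 2)

    def on_validated(self):
        self.quota = min(self.ceiling, self.quota * 2)

    def on_returned_unvalidated(self):
        pass  # overflow spew from a faulty GPU earns nothing
```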
Exactly. That's
There are indeed two issues here, but I'd categorise them differently.
The SETI -9 tasks are really difficult, because the insane science
application produces outputs which appear to the BOINC Client to be
plausible. It's only much, much further down the line that the failure to
validate exposes
For the benefit of our wider BOINC readership, could we exercise a little
care and precision in our terminology and logic, please?
As you say, these overflow results may be either valid or invalid, and that
will be determined solely by the server, some days or weeks after BOINC
reports - too
Actually, when a GPU gets into a corrupted state, other projects can
experience the same problems.
That is, the device works (reports no errors) but makes invalid computations
(or provides invalid data).
Invalid data for SETI most probably leads to pulse overflows.
Invalid data for some other
Ah. My memory isn't failing after all.
Mo's http://boinc.berkeley.edu/trac/ticket/985
- Original Message -
From: Richard Haselgrove r.haselgr...@btinternet.com
To: boinc_dev@ssl.berkeley.edu
Sent: Tuesday, May 25, 2010 9:20 PM
Subject: Re: [boinc_dev] host punishment mechanism revisited
That's a question for each project, not BOINC centrally.
It comes in two parts:
Does it need fast (valid results)?
(e.g. to generate dependent work, or to purge processed files from a small
server)
Can it cope with fast errors?
(does the demand for new work from a fast-erroring host cause
.ssl.berkeley.edu:80; No error
03-Jun-2010 09:41:23 [---] [http_debug] [ID#1439] Info: Expire cleared
03-Jun-2010 09:41:23 [---] [http_debug] [ID#1439] Info: Closing connection #0
03-Jun-2010 09:41:23 [---] [http_debug] HTTP error: Couldn't connect to server
--- On Wed, 2/6/10, Richard Haselgrove
.
--- On Fri, 4/6/10, Richard Haselgrove r.haselgr...@btopenworld.com wrote:
From: Richard Haselgrove r.haselgr...@btopenworld.com
Subject: Re: [boinc_dev] host punishment mechanism revisited
To: David Anderson da...@ssl.berkeley.edu
Cc: boinc_dev@ssl.berkeley.edu
Date: Friday, 4 June, 2010, 1:44
info still says
Number of tasks completed 1183
Max tasks per day 218
Number of tasks today 273
Consecutive valid tasks 118
Average turnaround time 0.45 days
- Original Message -
From: Richard Haselgrove r.haselgr...@btopenworld.com
To: David Anderson da...@ssl.berkeley.edu
Cc: boinc_dev
the host contacts the scheduler.
-- David
Richard Haselgrove wrote:
There's definitely something wrong with the (daily) quota resetting
mechanism - whether that's the fault of the new code, or SETI's Beta
server, I'll leave to you.
Host 12316 last downloaded a SETI Beta task at 4 Jun 2010 20
Jun 2010 at 20:27, David wrote:
The host info is only updated when the host contacts the scheduler.
-- David
Richard Haselgrove wrote:
There's definitely something wrong with the (daily) quota resetting
mechanism - whether that's the fault of the new code, or SETI's Beta
server, I'll
This is really addressed to the whole BOINC development team, but I suppose to
David in particular.
I wonder if it would be possible for you to post, hopefully in the
not-too-distant future, a summary of where we're up to with the whole
'CreditNew' bundle (which of course includes important
Meanwhile, if Lynn and Raistmer have finished with their [rr-sim] of the SETI
message boards, is there any news of the state of the BOINC code?
- Original Message -
From: Richard Haselgrove r.haselgr...@btinternet.com
To: David Anderson da...@ssl.berkeley.edu; Eric J Korpela
korp
Only that it's yet another change which may ripple through to unexpected
places, and require re-writes of both internal and external documentation.
And as always, the transition can cause confusion, while some users have
older clients and others have newer.
Provided the transition, and the
Yes please. During my all-too-brief experience as a SetiMod, I felt the need
for a PM (they were just becoming popular) alternative notification to the
email-only Mod procedure.
And there's one gap where no notification is issued, and it causes
confusion.
When a *thread* is moved by a
The new-ish Fermi class of NVidia GPUs has much better hardware support for
multitasking than its predecessors. I don't know of any project that is
using this officially so far, perhaps because Fermi-class GPUs are still
expensive and comparatively rare, but prices are falling and sales
You might want to think about transitional arrangements for this - there
have been reports of quite high numbers of lost results at SETI recently (up
to 1,000 or more per host, IIRC - can't check at the moment, as the message
boards are down). Maybe start with 'lost tasks less than a week old'
--- On Fri, 15/10/10, David Anderson da...@ssl.berkeley.edu wrote:
After a considerable amount of work,
I have resurrected the BOINC client simulator,
with some major improvements:
...
2) it takes existing files (client_state.xml, cc_config.xml,
global_prefs.xml) as input rather than
I just tried the simulator, and got a page full of badly formatted text. I
thought there was supposed to be a graph of some sort. Using IE7.
jm7
I tried it a little while ago, and got both graphs viewable in IE8, and some
properly-formatted, tabular output.
Also got some unexpected
My first reaction, on reading this, was that resource share should be based
on work done, measured directly, rather than by using credit as a surrogate.
But I've read the design document, and you've covered that point - and also
taken into account the immediate consequence, which is that
Estimated Credit:
If there are two GPUs with very different characteristics, but both of the
same type, would they be counted separately? It is my belief that they
should be, as it is entirely possible that there is a large speed factor
between two GPUs installed in the same system.
jm7
It
I agree with John - this is a major change, and will need extensive testing.
David, may I ask what your current expectation is for the timeline for the
new scheduler? Specifically, are you going to attempt to incorporate it into
v6.12, or would it be better to get all the 'notices' angst out of
, Richard Haselgrove wrote:
I agree with John - this is a major change, and will need extensive testing.
David, may I ask what your current expectation is for the timeline for the
new scheduler? Specifically, are you going to attempt to incorporate it into
v6.12, or would it be better to get
David,
While you're looking at GPU preferences, would it be possible to look again
at the trio of
Use CPU Enforced by version 6.10+
Use ATI GPU enforced by 6.10+
Use NVIDIA GPU enforced by 6.10+
preferences we introduced about a year ago (I think those are the latest
versions, though
If anyone finds themselves looking around in this sort of code, could I also
mention that I've sometimes seen that the font and/or size used for the
preview of message boards posts is different from the final display after
posting: it's most noticeable if you're trying to line up columns. If I
from reality, as appears to have
happened with host 5116976.
On 08-Dec-2010 12:21 PM, Richard Haselgrove wrote:
Now that SETI is back up, details have emerged of a host where the records
appear to indicate that v6.08 (plan_class cuda) is faster than v6.09
(cuda23). At SETI, this would
There's a spammer active on the BOINC message board, and Ageless is away.
Anyone around with the power to banish the account?
http://boinc.berkeley.edu/dev/forum_thread.php?id=5841&nowrap=true#36008
Although v6.2 and above have been around for a long time, well over two
years, some issues like http://boinc.berkeley.edu/trac/ticket/652 render it
unusable on certain hosts.
My last message was wrong.
Clients before 6.2 (early 2008) didn't know about plan class;
they assumed app versions
I don't think David was saying that only one model number can be handled per
manufacturer - many user posts on many message boards would put the lie to
that.
He said that only one 'type' of GPU can be handled, which in turn I take to
mean that only one <main_program/> can be specified per GPU
Now that GPU task switching by short term debt has successfully replaced FIFO
scheduling, would it be possible to add
cuda_short_term_debt
ati_short_term_debt
to the set_debt function in gui_rpc_server_ops (line 944 onwards)?
This may require reverting the current
cuda_debt
ati_debt
to
I think this might be a thread (as opposed to application) priority issue.
Compare these two Process Explorer screenshots:
http://img834.imageshack.us/img834/9496/einsteincudapriority.png
http://img703.imageshack.us/img703/3087/seticudapriority.png
(both taken from the same computer, during the
There is an updated version of that list at
http://setiathome.berkeley.edu/forum_thread.php?id=62573&nowrap=true#1062822
Given the limited number of hosts involved, in the short term the wastage
(something of the order of 10% of SETI's limited download bandwidth) maybe
could/should be stemmed
OK, back-of-envelope error - that 10% overstates the size of the problem.
But, more realistically, 20 hosts @ 2,000 tasks wasted each per day is close to
4% of all multibeam tasks issued.
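Spelling out that back-of-envelope calculation (the total issue rate is inferred from the 4% figure, not a published number):

```python
hosts = 20
wasted_per_host_per_day = 2000
wasted = hosts * wasted_per_host_per_day      # 40,000 tasks/day wasted

fraction = 0.04                               # "close to 4%"
implied_total_issued = wasted / fraction      # ~1,000,000 multibeam tasks/day
```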
- Original Message -
From: Richard Haselgrove
There is an updated version of that list
.
--
Joe
On Fri, 07 Jan 2011 06:41:07 -0500, Richard Haselgrove wrote:
OK, back-of-envelope error - that 10% overstates the size of the problem.
But, more realistically, 20 hosts @ 2,000 tasks wasted each per day is
close to 4
If you're able to work on any of this, would it be possible to integrate the
Moderation system with the Personal Message system (which came after it)?
Not thinking of the major issues that Pappa has in mind, but simple things
like 'post move' and 'thread move'. For recently-joined users, PMs
Yes, it was removed from sched_result.cpp by
http://boinc.berkeley.edu/trac/changeset/23118
- Original Message -
From: Travis Desell des...@cs.rpi.edu
To: boinc_proje...@ssl.berkeley.edu; boinc_dev@ssl.berkeley.edu
Sent: Wednesday, April 06, 2011 11:40 PM
Subject: [boinc_dev] results