While you lot have been chatting, I have opened trac tickets #896
and #897
Does the reporting style (level of detail, sub-headings) seem appropriate?
If so, I'll hang on to the template.
- Original Message -
From: "Kathryn Marks"
To: "Eric Myers"
Cc: "John.McLeod" ;
Sent: Thursd
Rom Walton wrote
I think I can do that in a day. I think I'll use the remaining time
allocated to the setup work to get the installer working on domain
controllers again.
Please don't forget that there's some registry messiness in /a
administrative setups which could use a bit of installe
This is an issue which I, mo.v from CPDN, and others have requested action
on over the years. Have a read of http://boinc.berkeley.edu/trac/ticket/147.
The latest enhancement is that ctrl-shift-A should allow your friend to
escape the 'trapdoor' of the one-way switch to simple view. How that
tr
Charlie Fenton wrote
> Unfortunately, WCG who designed and implemented the Simple View
> specified there should be no menus in Simple View other than the
> minimum required by the OS, so there is no indication of this
> shortcut. The best I could do without violating this requirement is
> to incl
Since the circumstances seem to be easier to re-create in the current SETI
configuration (almost out of Astropulse work), would you like me to try for
more [rr-sim] logs - and any other flags you care to suggest?
> Richard Haselgrove wrote:
>> 17/04/2009 11:21:51 [wfd] target wo
1) Correct to close #869: tested and working (at the third attempt ;-) )
2) Can close #897 - fix in v6.6.31 seems effective. #890 already closed.
3) I don't mind the wontfix on #620 - I thought it was a clumsy solution -
but I think David's closure note misses the point.
"The client keeps track
s yet, but
offers of help
- Original Message -
From: "David Anderson"
To: "Richard Haselgrove"
Cc: "BOINC Developers Mailing List"
Sent: Wednesday, June 03, 2009 12:30 AM
Subject: Re: [boinc_alpha] Work fetch bug continues in v6.6.28
>I have
It's ruined Josef Segur's signature.
- Original Message -
From: "Michael Roberts"
To: "BOINC Dev Mailing List"
Sent: Thursday, June 04, 2009 7:13 PM
Subject: [boinc_dev] changeset 18285 only partially applied?
> It looks as if the changes to main.css in
> http://boinc.berkeley.edu
Nicolás Alvarez wrote
> ...? What is this about?
It's private SETI cafe-talk.
Michael, you'd do better sending that to
seti_moderators -at- ssl.berkeley.edu
___
boinc_dev mailing list
boinc_dev@ssl.berkeley.edu
http://lists.ssl.berkeley.edu/mailman
David Anderson wrote
> My inclination would be to simplify the "user info" box even further,
> e.g. to remove the credit stuff,
> and replace "Joined: Aug 8 08" with "Joined: 10 months ago" or something
> -- David
Please don't. People rather like celebrating their BOINC "birthdays",
especially i
David Anderson wrote
> I'll do this if someone sends me an array
> that parallels the existing $countries array (see below)
> containing the 2-letter country codes used by the FamFamFam images
> -- David
A few errors have crept into http://boinc.berkeley.edu/trac/changeset/18390
I used to do thi
essage -----
From: "Jorden van der Elst"
To: "Richard Haselgrove"
Cc: "David Anderson" ;
Sent: Friday, June 12, 2009 12:20 PM
Subject: Re: [boinc_dev] Add country to forum profile
> On Fri, Jun 12, 2009 at 1:05 PM, Richard
> Haselgrove wrote:
>> D
sn't, so perhaps
your UN image could be duplicated there ]
- Original Message -
From: John 37309
To: Richard Haselgrove
Cc: David Anderson ; BOINC Dev Mailing List
Sent: Friday, June 12, 2009 2:21 PM
Subject: Re: [boinc_dev] Add country to forum profile
Richard,
I've had a search of my logs, and found a few similar instances:
19-Jun-2009 18:04:19 [SETI@home] [sched_op_debug] CPU work request: 0.00
seconds; 0 idle CPUs
19-Jun-2009 18:04:19 [SETI@home] [sched_op_debug] CUDA work request: 130248.00
seconds; 0 idle GPUs
19-Jun-2009 18:04:34 [SETI@home] Sche
> The only mechanism for changing it is to modify the duration correction
> factor. The duration correction factor will correct itself over time
> anyway, but it is possible to set it by hand to be closer to where it will
> eventually stabilize.
>
> jm7
The problem with relying on DCF is that no
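The self-correcting behaviour of the duration correction factor quoted above can be sketched roughly as follows. This is an illustrative model only, not the actual client code; the point is the asymmetric shape usually described for DCF (jump up at once on an overrun, ease down slowly otherwise), and the 10% decay rate is an assumption for illustration.

```cpp
#include <cassert>

// Illustrative DCF update, NOT the actual BOINC client code.
// dcf:       current duration correction factor
// estimated: the task's estimated runtime (seconds)
// actual:    the task's actual runtime (seconds)
double update_dcf(double dcf, double estimated, double actual) {
    double ratio = actual / estimated;
    if (ratio > dcf) return ratio;     // overrun: correct upwards immediately
    return dcf + 0.1 * (ratio - dcf);  // otherwise: decay slowly toward the new ratio
}
```

This shape explains both halves of the complaint: an overrun corrects instantly, but walking an inflated DCF back down takes many results.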
> [ Does anyone run training courses for new BOINC administrators? ]
That was just a throwaway line in my grouse about admins who don't supply
credible figures, and BOINC developers who don't initiate
DCF corrections until after the horse has bolted - sorry, task has
completed.
But the last 4
If you have multiple login accounts for the computer, I believe you have to
repeat step (3) (heeding the warning in step (4)) for each separate user
account.
- Original Message -
From: "William"
To:
Sent: Tuesday, July 14, 2009 8:08 PM
Subject: [boinc_dev] Release Notes for 6.6.36 in
> FYI
> http://tech.slashdot.org/story/09/07/14/1743230/BOINC-Exceeds-2-Petaflops-Barrier
>
> Cheers,
> Reinhard.
Most project Server Status pages, such as
http://einstein.phys.uwm.edu/server_status.php, explain that they derive
their FLOP-equivalence "from the sum of the Recent Average Credit (R
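For reference, the conversion those status pages use follows from the Cobblestone definition: a reference host sustaining 1 GFLOPS earns 200 credits per day. A minimal sketch:

```cpp
#include <cassert>

// Convert summed Recent Average Credit (credits/day) to a FLOPS
// estimate via the Cobblestone definition: 200 credits/day == 1 GFLOPS.
double flops_from_rac(double total_rac) {
    return total_rac / 200.0 * 1e9;
}
```

On this scale, the "2 Petaflops" headline corresponds to a project-wide RAC of around 400 million.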
> That only works if you have the bandwidth to actually get the data to the
> validator. The problem at the moment in SETI is the last mile of internet
> connection. There are several possible solutions, but having the upload
> servers else where does not really help that much. The uploaded data
>> Advantages
>>
>> Relatively simple server requirements for the 'data concentrator' - just
>> cgi, and some filesystem-level cron scripting
>> Much cheaper than $80,000 for 'fibre up the hill'
>> Quicker to implement than more esoteric suggestions - I don't think there's
>> anythin
I didn't pursue it last time, because I was using an old version of BOINC, and
saw no reason to upgrade at the time - this was 2008 BC (before CUDA). I did
ask if anyone could confirm my observation with a current BOINC, but answer
came there none.
I am, however, glad that I posted on a message
Lynn W. Taylor wrote:
> Some sort of daemon on the upload server would then send work back to
> the main site as quickly as possible without saturating the link.
>
> ... but you're still limited to 80 or 90 megabytes.
>
> It doesn't solve the problem (too many simultaneous connections); it just
> m
> You do get some increased throughput if the problem is dropped connections
> and packets, and the distributed upload servers have sufficiently better
> connections, and the link to the final upload server has sufficient
> bandwidth to handle the load if the connections are carefully controlle
- Original Message -
From: "Maureen Vilar"
To: "Rom Walton"
Cc: ; "Boinc Projects"
Sent: Thursday, August 06, 2009 5:45 PM
Subject: Re: [boinc_dev] [boinc_projects] Progress Thru Processors Launched
>I haven't installed PtP myself because I don't want an account manager.
>
> Is it tr
It's on #713 as well - not yet cleaned.
- Original Message -
From: "Jorden van der Elst"
To: "Rom Walton"
Cc: "BOINC dev"
Sent: Friday, August 07, 2009 1:01 PM
Subject: Re: [boinc_dev] *SPAM* Re: Spam on Trak tickets
> It's back, undeterred by whatever spam filter.
>
> I just cleane
No, that's the only notification I've got since the last batch.
> Done. Any more?
>
> On Fri, Aug 7, 2009 at 2:26 PM, Richard
> Haselgrove wrote:
>> It's on #713 as well - not yet cleaned.
>
> --
> -- Jord.
>
> No, that's the only notification I've got since the last batch.
>
>> Done. Any more?
Spoke too soon. #621, #896
When checking this report, please consider other OSs as well. There have
been repeated, but inconclusive, reports from Windows users with multiple
CUDA cards (such as Fred W) of similar behaviour. In Windows, it sounds as
if this "tasks on all cards exit at the same time" is mainly triggered by
Sounds as if you disabled _attachments_ to tickets, thus preventing a useful
code patch being added to #139, without disabling new _comments_ on tickets,
which was the spam vector.
- Original Message -
From: "David Anderson"
To: "BOINC Developers Mailing List"
Sent: Wednesday, August
I thought when the 'received' timestamp was introduced
(http://boinc.berkeley.edu/trac/changeset/18531), it was only intended to
influence the running order of GPU tasks?
Maybe it should be extended to schedule intra-project CPU tasks as well,
leaving inter-project scheduling to debt (which GPU
Lynn W. Taylor wrote
> It seems to me that the big fear is the two-week timer: if a work unit
> can't be uploaded in two weeks, it's going to be thrown away, causing
> "irreparable harm to the project" and a tragic hit to the cruncher's RAC.
CPDN CM3-160 models can run for 4 months or longer. Lo
Travis Desell wrote:
> Is there another variable to determine the time a workunit has spent
> on the GPU? For whatever reason our CUDA applications are reporting a
> very small cpu_time value, I'm wondering if this is some issue we have
> in our application, or if the BOINC client is reporting i
It would be really helpful to be able to know the elapsed_time of a result,
mainly to see exactly how fast these workunits are crunching (and if anyone is
trying to fake results). Maybe add something like an elapsed_time or gpu_time
field to the RESULT struct. Or maybe just change cpu_time t
Nicolás Alvarez wrote:
No, the client doesn't measure "GPU time" at all. Is it even possible to
get that information from nvidia APIs?
I couldn't find such an API.
The flow of a CUDA app is like this:
a) the CPU part launches a GPU kernel and sleeps
b) the kernel executes; when it's done, the
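That launch-and-sleep flow is exactly why cpu_time under-reports GPU work: while the CPU thread is blocked waiting on the kernel, wall-clock time advances but process CPU time barely moves. A minimal illustration in plain C++, using a sleep as a stand-in for the kernel wait (no CUDA required):

```cpp
#include <chrono>
#include <ctime>
#include <thread>
#include <utility>

// Returns {wall_seconds, cpu_seconds} for a thread that sleeps for 'ms'
// milliseconds, mimicking a CPU thread blocked on a GPU kernel.
std::pair<double, double> timed_sleep(int ms) {
    auto wall0 = std::chrono::steady_clock::now();
    std::clock_t cpu0 = std::clock();

    std::this_thread::sleep_for(std::chrono::milliseconds(ms));  // "kernel" runs here

    double wall = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - wall0).count();
    double cpu = double(std::clock() - cpu0) / CLOCKS_PER_SEC;
    return {wall, cpu};  // wall is ~ms/1000; cpu is near zero
}
```

Measuring elapsed (wall) time alongside cpu_time, as suggested above, recovers the figure that actually reflects how long the GPU worked.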
Exactly the same phenomenon happens when there *are* files to upload.
Task finishes : cache recalculated with new DCF : shortfall found : work
fetch issued : result upload starts : scheduler contact complete : upload
complete : another scheduler contact needed later.
That's what we were trying
Unfortunately, a driver which works with one application for one developer may
not work for a different developer with a different application, even on the
same card.
I was recently comparing MilkyWay, where a user was getting nothing but errors
with Catalyst 13.4 and their current app, but suc
We had some discussion about this issue back in January.
For most of the lifetime of BOINC on GPUs, CPU throttling has *not* been
applied to GPU apps:
it was dis-applied on 27 Oct 2008 -
http://boinc.berkeley.edu/trac/changeset/3719268f1c3807dcac821c455090e2243133bb8d/boinc-old
and re-applied o
What do we mean by 'use' a certain fraction of a CPU, anyway?
AFAIK, projects have a rather crude tool by which they declare what proportion
of an application's fpops are performed on the CPU - x in
the xml format of the plan class definition - and nothing else. The *scheduled*
CPU usage (and I
I think we need to be careful about the scoping validity of data, and we
haven't always been in the past.
In this particular case, I think that it's a reasonable assumption that the RAM
requirements of an application (again in the BOINC terminology) on one platform
have a deterministic relation
It's there already, but might be reported under the alternative 'pni' (Prescott
New Instructions) tag.
>
> From: Radim Vančo
>To: BOINC Dev Mailing List
>Sent: Thursday, 8 August 2013, 10:13
>Subject: [boinc_dev] SSE3 detection
>
>
>I am now setting plan_clas
Then I still think we have a problem with the sample plan classes, e.g. in
http://boinc.berkeley.edu/trac/browser/boinc-v2/sched/plan_class_spec.xml.sample,
where the SSE3 plan class does refer to the <cpu_feature> as sse3
From what I remember the last time I looked at this, the CPU feature string is
search
I reported this some time ago, as part of a discussion at LHC. Since then, LHC
Classic has run with two separate plan classes, one for sse3 and one for pni,
with a bitwise-identical binary.
Plan class SSE3 works on most modern Intel CPUs, because the SSE3 string is
found within the SSSE3 (tripl
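The false match is easy to demonstrate: a plain substring search finds "sse3" inside "ssse3", while a whole-token comparison does not. A minimal sketch, not the actual scheduler code:

```cpp
#include <sstream>
#include <string>

// Naive check: substring search. Finds "sse3" inside "ssse3", so a host
// advertising only pni/ssse3 can wrongly satisfy an sse3 plan class.
bool has_feature_naive(const std::string& features, const std::string& f) {
    return features.find(f) != std::string::npos;
}

// Safer check: compare whole whitespace-delimited feature tokens.
bool has_feature_token(const std::string& features, const std::string& f) {
    std::istringstream iss(features);
    std::string tok;
    while (iss >> tok) {
        if (tok == f) return true;
    }
    return false;
}
```

On a typical Intel feature string such as "pni ssse3 cx16", the naive check reports sse3 present while the token check correctly reports it absent (the flag there is "pni").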
OK, that works on Windows too:
07/08/2013 22:16:03 | | Processor features: fpu vme de pse tsc msr pae mce cx8
apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss htt
tm pni ssse3 cx16 sse4_1 sse4_2 popcnt syscall nx lm vmx tm2 pbe
>
> Fr
In this context, I'm slightly worried about the line
688 if (hasSSE3) strcat(capabilities, "sse3 ");
in hostinfo_unix.cpp
>
> From: David Anderson
>To: boinc_dev@ssl.berkeley.edu
>Sent: Thursday, 8 August 2013, 18:59
>Subject: Re: [boinc_dev] SSE3 detection
PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT
TM PBE SSE3 DTES64 MON DSCPL VMX SMX EST TM2 SSSE3 CX16 TPR PDCM SSE4.1 XSAVE
Tue 10 Jul 23:00:11 2012 | | OS: Mac OS X 10.6.8 (Darwin 10.8.0)
>
> From: David Anderson
>To: Richard Haselgrove
>Cc: &
SSSE3 (double S) is there too.
>
> From: Charlie Fenton
>To: Richard Haselgrove
>Cc: David Anderson ; "boinc_dev@ssl.berkeley.edu"
>
>Sent: Thursday, 8 August 2013, 23:45
>Subject: Re: [boinc_dev] SSE3 detection (Radim Van?o)
&
Set the <cpu_sched>1</cpu_sched> log flag. Output:
24/08/2013 11:57:35 | Milkyway@Home | [cpu_sched] Preempting
ps_nbody_08_06_dark_2_1372784655_817061_4 (left in memory)
(that was removed from the default logging set some years ago)
>
> From: Toralf Förster
>To: "boinc_dev
I suggest you re-read the discussion on this subject on this list last month,
starting from
http://lists.ssl.berkeley.edu/pipermail/boinc_dev/2013-July/020215.html
>
> From: Radim Vančo
>To: boinc_dev@ssl.berkeley.edu
>Sent: Sunday, 25 August 2013, 10:49
>Sub
Would it be possible to 'harden' the BOINC server operations console, please?
I'm thinking about adding prominent warnings against common - but far-reaching
- errors, and perhaps even some sanity checking on inputs.
Originally, BOINC was developed as a cheap, lightweight platform which would
al
This looks very useful, both for application developers/testers, and when
helping volunteers diagnose faults via a project message board.
But either process is slowed down and made harder by a missing link in the
chain.
The local client knows a task only by its name, as in the list below.
The p
We have had occasional, spasmodic, reports at SETI of large numbers of tasks
being inexplicably marked as 'abandoned'.
The first batch occurred between February and May 2013, and are documented in
http://setiathome.berkeley.edu/forum_thread.php?id=70946
We only received reports from a small num
No problem - don't mention it.
Always good to find a nice easy one we can sort out by ourselves, before the
Americans have even finished breakfast...
Well, we have that one fixed, with thanks to Richard Haselgrove.
>
>He figured out that the Language notation in BOINC Manager matt
You were careful to put that in an block, rather than a
block, I presume?
>
> From: Charles Elliott
>To: boinc_dev@ssl.berkeley.edu
>Sent: Sunday, 29 September 2013, 14:25
>Subject: [boinc_dev] (no subject)
>
>
>In cc_config.xml:
>
>
>
> h
The BOINC server does not send out any workunits to anyone, ever - Beta or
otherwise.
You might be confusing us with the SETI @ home project, which does have a
server problem currently - I have emailed some summary logs to the *project*
staff.
>
> From: Charle
There is also a message board thread reporting problems - possibly from the
same person.
http://boinc.berkeley.edu/dev/forum_thread.php?id=8684
>
> From: Eric J Korpela
>To: "boinc_dev@ssl.berkeley.edu"
>Sent: Sunday, 27 October 2013, 23:41
>Subject: [boinc_
There seem to be some useful tools in
http://boinc.berkeley.edu/trac/wiki/AssignedWork
@David: Is there an easy way for me to assign these results to specific
>users? I want to focus on users that I can contact in our forum to check
>on the stderr.txt during runtime.
>
>Regards
>Christian
>
I think we ought to look into this question of 'abandoned' results more
carefully.
I can only find 'mark_results_over(host)' in two places in handle_request.cpp,
and they're both to do with hosts where there's "evidence that the host has
detached". But we get a steady dribble of reports - mostl
r, only to find that the results are not accepted
scientifically, and no credits are awarded. Even synchronisation via "aborted
by server" (preventing the waste of time/resources) would be better than that.
>____
> From: Stephen Maclagan
>To: Rich
Given that Moore's Law applies to server hardware too, do we have any evidence
that any current BOINC project server would be over-stressed by one RPC per
hour per active host? CPDN have adopted a 3600 second backoff between RPCs, but
I believe this is mostly to prevent individual hosts download
There's been an interesting little experiment conducted under this title in the
Einstein@Home Cruncher's Corner:
http://einstein.phys.uwm.edu/forum_thread.php?id=10517
Today's conclusion says:
"So I think that we can now say yes it is possible to crunch GPU work in a
virtualized world using "p
I just did a git pull update to a boinc-v2 clone last refreshed about a
fortnight ago. Ran cleanly, no errors.
>
> From: Rom Walton
>To: Toralf Förster ; Joachim Fritzsch
>
>Cc: boinc_dev@ssl.berkeley.edu
>Sent: Friday, 31 January 2014, 19:32
>Subject: Re: [b
I thought we had this protection in place already?
Specifically, since your checkin 60fc3d3 of April 2011:
"client: defer reporting completed tasks if an upload started recently;
we might be able to report more tasks once the upload completes."
http://boinc.berkeley.edu/trac/changeset/60fc3d3f22
"... the time spent already (the only really well known item in the list) ..."
Which is why I'd like to see greater weight given to this in the displayed
estimate. It seems to be the wrong way round, to distort the displayed figures
for "well-behaved" (in the mathematical sense) projects, just
As it happens, SIMAP is one of the projects which could honestly use the
"estimates are linear and can be trusted" flag, if available.
>________
> From: Richard Haselgrove
>To: David Anderson ; BOINC Developers Mailing List
>
>Sent: Satur
| | [work_fetch] Request work fetch: project finished
uploading
12/02/2014 17:52:36 | Einstein@Home | Scheduler request completed: got 5 new
tasks
>
> From: Richard Haselgrove
>To: David Anderson ; BOINC Developers Mailing List
>
>Sent: Tuesday, 11 Febru
suggested to you privately
at the time.
>________
> From: David Anderson
>To: Richard Haselgrove
>Cc: BOINC Developers Mailing List
>Sent: Monday, 9 September 2013, 3:22
>Subject: Re: [boinc_dev] new GUI RPC for getting completed /reported tasks
>
>
>Good idea! I made the foll
Unfortunately, some projects take the easy cop-out - applying a massive
rsc_fpops_bound to circumvent resource limit exceeded, instead of resolving it
properly from first principles.
I suspect that some of the smaller projects have enough on their plate getting
their heads round their own scien
But I think we had a change, somewhere along the way from "years ago", which
tipped the emphasis even further away from 'linear' and towards 'based on
initial estimate'. I'll try to find it.
>
> From: "McLeod, John"
>To: William ; Oliver Bock ; Jon
>Sonntag
power.
>
> From: Nicolás Alvarez
>To: Richard Haselgrove
>Cc: BOINC Developers Mailing List
>Sent: Tuesday, 18 February 2014, 16:05
>Subject: Re: [boinc_dev] Estimated Time Remaining, frictional reporting ...
>
>
>There are two different values,
ring by send time/date.
>> Would it be possible to get custom column sorting instead (e.g. by clicking
>> on the column head)? That would also make it possible to sort by device,
>> i.e. find CPU/GPU tasks quickly, something that is sadly missing atm.
>> The search funct
At Einstein, you can do this via the project preferences page.
You can do it at any other project using an app_config.xml file.
>
>
> Also,
>it would be nice to execute more than one WU per GPU.
>
>Charles Elliott
>
pull' problem, not a 'push' problem.
>
> From: "McLeod, John"
>To: "elliott...@verizon.net" ; 'Richard Haselgrove'
>; 'Nicolás Alvarez' ;
>'William Stilte'
>Cc: 'BOINC Developers Mailing List'
>Sen
With the release of the GTX 750/750Ti 'Maxwell' GPUs utilising the GM107 chip,
could somebody please update /lib/coproc.cpp so that the peak_flops value for
this architecture is calculated correctly?
Referring to
http://international.download.nvidia.com/geforce-com/international/pdfs/GeForce-GT
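For illustration, the usual shape of that calculation: cores-per-multiprocessor depends on the compute capability, and Maxwell (CC 5.x, e.g. GM107) has 128 cores per SM rather than Kepler's 192. This is a hedged sketch in the spirit of lib/coproc.cpp, with per-architecture figures taken from public NVIDIA specs; it is not the actual BOINC table.

```cpp
#include <cassert>

// Cores per streaming multiprocessor by compute capability (assumed
// figures from public architecture specs, for illustration only).
int cores_per_sm(int major, int minor) {
    if (major == 1) return 8;                        // Tesla
    if (major == 2) return (minor == 0) ? 32 : 48;   // Fermi
    if (major == 3) return 192;                      // Kepler
    if (major == 5) return 128;                      // Maxwell (GM107 is CC 5.0)
    return 128;                                      // fallback guess
}

// peak FLOPS = SMs * cores/SM * clock (Hz) * 2 (fused multiply-add
// counts as two floating-point operations per clock).
double peak_flops(int sms, int major, int minor, double clock_hz) {
    return sms * cores_per_sm(major, minor) * clock_hz * 2.0;
}
```

A GTX 750 Ti (5 SMs, CC 5.0, ~1020 MHz) comes out at roughly 1.3 TFLOPS on this formula, in line with the published spec; treating it as Kepler-style 192 cores/SM would overstate it by half as much again.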
Hear, hear.
When I was in a similar position (25 years ago), I was fortunate on two counts:
1) I was hired to write code only.
2) My project sponsor - who was *not* a coder - recognised the need for proper
user documentation from the outset.
Her solution was to recruit a professional author - p
That sounds as if you haven't fully considered the write-caching abilities, and
multi-threaded nature, of modern operating systems - that Eric was trying to
alert you to.
There are, admittedly, two cases to be considered:
1) Client crash, leaving cached data available to be handled by the opera
Looking around at SETI, trying to analyse how well the server was performing
while the status page was dysfunctional, I came across this user.
http://setiathome.berkeley.edu/show_host_detail.php?hostid=7220378
He (and his - somewhat elderly - computer) have been members for about 3 weeks.
He's
The workunit in question,
http://boinc.thesonntags.com/collatz/workunit.php?wuid=5741706, has two
results: the first had an initial deadline of 8 May (normal), and it's only the
retry which has the reduced 2-day deadline. I imagine David's reference to
http://boinc.berkeley.edu/trac/wiki/Projec
nding on server configuration. Nothing is lost.
But if the *server* marks the work as 'abandoned', but the client retains it
and continues to work on it, then electricity is wasted.
>____
> From: Jon Sonntag
>To: Richard Haselgrove
>Cc: "boin
Thanks David - this looks very helpful.
Could you possibly add to the list of tags allowed in
app_config.xml, so that we can use the new feature at projects which don't
update their server code very often?
>
> From: David Anderson
>To: Boinc Projects ; BOINC De
Interesting stuff, David.
I'm not sure I'm convinced by the total at the foot of the GFLOPs/computer
column. Doesn't that come out as something like "total CPU power available to
the project if there was one computer of each type attached to the project, and
it was running full time"?
"Total C
That entry in client configuration reads like it might be a left-over from
BOINC v5 and before.
BOINC v6 and later separated BOINC into 'program' and 'data' locations:
cc_config.xml lives in the data directory already, so it's a bit
self-referential to have another setting option there.
In Win
Provided the Berkeley builds keep reasonably up-to-date with the development
mainline. There have been some lengthy pauses recently, e.g. during the Android
push - as you can tell by the length of the changelist for v7.3.18
>
> From: Jord van der Elst
>To: Rom
And bad form, with two separate issues to report. Sorry again.
1) Use of outlier detection to avoid skewed averages
2) Initial runtime estimates on the Android platform
1) Outlier detection.
This arises from the recent introduction of a new app_version at the LHCclassic
project. LHC, by its ver
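One simple form of runtime-outlier detection is to drop samples more than k standard deviations from the mean before averaging. This is an illustrative sketch only, not the server's actual PFC/runtime statistics code:

```cpp
#include <cmath>
#include <vector>

// Mean of 'v' after excluding samples more than k standard deviations
// from the plain mean. Illustrative outlier rejection, not BOINC code.
double robust_mean(const std::vector<double>& v, double k) {
    double sum = 0;
    for (double x : v) sum += x;
    double mean = sum / v.size();

    double var = 0;
    for (double x : v) var += (x - mean) * (x - mean);
    double sd = std::sqrt(var / v.size());

    double kept = 0;
    int n = 0;
    for (double x : v) {
        if (std::fabs(x - mean) <= k * sd) { kept += x; n++; }
    }
    return n ? kept / n : mean;
}
```

A single wildly long (or absurdly short) runtime in a small sample would otherwise drag the average, and hence every subsequent estimate, a long way off.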
The 'projected flops based on PFC avg' reports come from the 'Albert' test
server, which has recently been updated (within the last 7 days) to the current
BOINC master server code. Germany has a public holiday today, but I'm sure
Bernd will help facilitate investigations into this issue when tim
e: [boinc_dev] EXIT_TIME_LIMIT_EXCEEDED (sorry, yes me again, but
>please read)
>
>
>Does this problem occur on SETI@home?
>-- David
>
>On 07-Jun-2014 2:51 AM, Richard Haselgrove wrote:
>
>> 2) Android runtime estimates
>>
>> The example here is from SIM
s are
set for 'keep applications in memory when suspended', and I haven't seen any
similar behaviour: but my CPUs are all Intel, and as I said in that post,
"running on the spot" seems to be reported more often from AMD hosts.
>________
&
http://boinc.berkeley.edu/gitweb/?p=boinc-v2.git;a=commit;h=7b2ca9e787a204f2a57f390bc7249bb7f9997fea
>
> From: Josef W. Segur
>To: David Anderson
>Cc: "boinc_dev@ssl.berkeley.edu" ; Eric J Korpela
>; Richard Haselgrove
>Sent:
card parameters, then
scaled back by rule of thumb.
Maybe we should standardise on just one standard?
>________
> From: Richard Haselgrove
>To: Josef W. Segur ; David Anderson
>
>Cc: "boinc_dev@ssl.berkeley.edu"
>
/S excursions.
>
>Charles Elliott
>
>> -Original Message-
>> From: boinc_dev [mailto:boinc_dev-boun...@ssl.berkeley.edu] On Behalf
>> Of Richard Haselgrove
>> Sent: Tuesday, June 10, 2014 5:09 AM
>> To: Richard Haselgrove; Josef W. Segur; David Ande
a
>To: David Anderson
>Cc: Richard Haselgrove ; Josef W. Segur
>; "boinc_dev@ssl.berkeley.edu"
>
>Sent: Wednesday, 11 June 2014, 21:03
>Subject: Re: [boinc_dev] EXIT_TIME_LIMIT_EXCEEDED (sorry, yes me again, but
>please read)
>
>
>
>Another possibil
ved will lower the RAC considerably.
>
>> -Original Message-
>> From: boinc_dev [mailto:boinc_dev-boun...@ssl.berkeley.edu] On Behalf
>> Of McLeod, John
>> Sent: Thursday, June 12, 2014 9:24 AM
>> To: Richard Haselgrove; Eric J Korpela; David Anderson
>&
I agree with your reasoning about the delays or even potential complete failure
of current credit-granting mechanisms, David.
But I feel the statement about Resource Share being expressed in Flops is more
of an assertion than an axiom, and that the volunteer's perspective should be
considered t
This would presumably require the discipline that the libraries are strongly
versioned, so that they can be unambiguously identified?
We do that for the client now, but not for server code (since we lost the
'server stable' designation). Where does the API stand on that spectrum?
>___
That's what the theory says, yes, and I think the design document too. But when
I was testing multithreaded apps at Milkway, I saw the same as YoYo is
reporting now.
>
> From: "McLeod, John"
>To: yoyo ; BOINC Developers Mailing List
>; BOINC Projects
>Sent:
David,
I'm a little confused by today's checkin under this title:
http://boinc.berkeley.edu/gitweb/?p=boinc-v2.git;a=commit;h=e437d098243730d71aa531b31860e549bca303f6
To be useful and meaningful on a result page, this line should show the peak
flop count of the device the task was actually run
Only two requests may have reached your inbox, but there are 12 message board
threads (over 8 BOINC years) at SETI asking how it could be done. That's just
looking at the thread titles - I haven't read them all to see how serious the
intention to delete was.
>
My request is for the Windows operating system, but it may be applicable to
other OSs as well.
Context: I am writing an updated installation program to deliver anonymous
platform applications to volunteers' BOINC installations. Each time we release
an updated version, we observe something of th
Yay! I even get the bitness - "7.2.42 windows_intelx86"
That could be useful later. Thanks.
>
> From: Rom Walton
>To: Richard Haselgrove ; BOINC Dev MailingList
>
>Sent: Thursday, 7 August 2014, 18:20
>Subject: Re: [boinc_dev
The abandoning of tasks happens when the BOINC server 'thinks' that it has
'evidence' that the client has detached from the project and then re-attached
again. This has affected a number of users in the past, but has proved
extremely tricky to diagnose and resolve: not least, because most of the
alfeasance. In earlier reports like this, many (but not all) of the cases
appeared to be associated with long-distance and/or poor quality internet
connections - again, like this one.
>
> From: Eric J Korpela
>To: "McLeod, John"
>Cc: &q