A specific instance where the cooling solution is inadequate to support 
continuous 90% utilization of a resource is, IMO, no reason to exclude that 
particular GPU. 8400 GS cards have been doing useful work for S@H since 
January 2009, and I would guess the 6.08 CUDA app_version for 
setiathome_enhanced drives a similar GPU utilization.

Some way for BOINC to monitor resource temperatures would be ideal, but I doubt 
it's practical. Utilities like GPU-Z are not totally reliable even though 
they're updated frequently. Its temperature readings for your 8400 GS are 
probably correct, but even the latest version, 0.6.8, released 9 days ago, 
reports a GPU temperature for the HD 7660G portion of the AMD A10-4600M APU 
in my Lenovo laptop that is clearly wrong.
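For what it's worth, an external watchdog along these lines is possible outside of BOINC on NVIDIA hardware, by polling nvidia-smi. This is just a sketch, not anything BOINC supports: it assumes nvidia-smi is on the PATH, and the 90 C limit is an arbitrary number I picked for illustration.

```python
# Sketch of an external GPU temperature watchdog (not part of BOINC).
# Assumes the nvidia-smi tool is installed; the 90 C threshold is an
# arbitrary example value, not a BOINC setting.
import subprocess

THRESHOLD_C = 90  # example limit; pick whatever your hardware tolerates


def parse_temp(output):
    """Parse the plain-number output of
    `nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader`."""
    return int(output.strip().splitlines()[0])


def gpu_too_hot():
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=temperature.gpu",
             "--format=csv,noheader"],
            text=True)
    except (OSError, subprocess.CalledProcessError):
        return False  # no NVIDIA driver/tools found; nothing to check
    return parse_temp(out) >= THRESHOLD_C


if __name__ == "__main__":
    print("suspend GPU work" if gpu_too_hot() else "GPU temperature OK")
```

On a machine like yours, a script like this could be run from a scheduler and could invoke boinccmd to suspend GPU work when the reading gets too high, but as I said, I doubt anything this crude belongs in the client itself.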
--
                                                                 Joe



On Sat, 16 Mar 2013 05:49:48 -0400, <[email protected]> wrote:

While testing an HP m9047c (completely stock hardware, never overclocked)
for boinc alpha, I upgraded some drivers, and somehow version 7.0.56 of the
BOINC client started detecting that it could run OpenCL jobs on the
machine's "NVIDIA GeForce 8400 GS (256MB) driver: 314.07" GPU, which had
gone undetected until that series of driver updates. I was surprised that,
with so little memory, Seti assigned it two "AstroPulse v6 v6.04
(opencl_nvidia_100)" tasks.

Seti machine: 4719778
http://setiathome.berkeley.edu/show_host_detail.php?hostid=4719778

Fortunately, through sheer good luck, I was on the machine when the
huge Seti download finally finished and watched to see how it did. It was
working OK in the BOINC Manager, but I decided to see what was happening
with GPU-Z. It was reaching over 90% GPU utilization and about 48% memory
bandwidth utilization. However, after watching the temperature of the GPU
chip climb past 107 degrees C, I suspended GPU processing and set the
<no_gpus> flag in cc_config.xml. I aborted the running job, and the second
job aborted with a status of "201 (0xc9) EXIT_MISSING_COPROC". Then I
restarted the BOINC client.
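For anyone wanting to do the same, the flag goes in the <options> section of cc_config.xml in the BOINC data directory; a minimal file looks like this (read at client startup or via "Read config files" in the Manager):

```xml
<!-- cc_config.xml: tell the client not to use any GPUs -->
<cc_config>
  <options>
    <no_gpus>1</no_gpus>
  </options>
</cc_config>
```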

Old NVIDIA chips are known in the trade press for having problems at high
temperatures because of a mismatch in the thermal expansion properties of
their internal packaging materials, resulting in breakage. I know this was
mentioned for the 65 nm and 55 nm chips in a 2008 article, but I don't know
about these chips, which are 80 nm IIRC. You can read some reprints of the
"bumpgate" articles starting at the address below.

http://semiaccurate.com/2010/07/11/why-nvidias-chips-are-defective/

If it were me, I'd refuse to let the BOINC client recognize these chips as
usable GPUs.

David Ball
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
