On Wed, Sep 02, 2009 at 08:04:43PM -0700, David Anderson wrote:
Maybe we should add mechanisms to the server software
that render it inoperative unless the project admin
has addressed the basic security issues.
E.g. nothing works if html/ops is unprotected,
For that you should parse the
Is there another variable to determine the time a workunit has spent
on the GPU? For whatever reason our CUDA applications are reporting a
very small cpu_time value, I'm wondering if this is some issue we have
in our application, or if the BOINC client is reporting it wrong...
Travis Desell wrote:
Is there another variable to determine the time a workunit has spent
on the GPU? For whatever reason our CUDA applications are reporting a
very small cpu_time value, I'm wondering if this is some issue we have
in our application, or if the BOINC client is reporting it
It would be really helpful to be able to know the elapsed_time of a
result, mainly to see exactly how fast these workunits are crunching
(and if anyone is trying to fake results). Maybe add something like
an elapsed_time or gpu_time field to the RESULT struct. Or maybe
just change
It would be really helpful to be able to know the elapsed_time of a result,
mainly to see exactly how fast these workunits are crunching (and if anyone is
trying to fake results). Maybe add something like an elapsed_time or gpu_time
field to the RESULT struct. Or maybe just change cpu_time
No, the client doesn't measure GPU time at all. Is it even possible
to get that information from nvidia APIs?
Sent from my iPod
On 03/09/2009, at 10:51, Travis Desell des...@cs.rpi.edu wrote:
Is there another variable to determine the time a workunit has spent
on the GPU? For
On Thu 03 Sep 2009 12:05:15, Richard Haselgrove wrote:
I would urge the creation of a second field, rather than re-cycling the
existing field for a different purpose. The ratio of the two times is also
an interesting and potentially useful figure - CPU overhead of GPU app,
effective number
Nicolás Alvarez wrote:
No, the client doesn't measure GPU time at all. Is it even possible to
get that information from nvidia APIs?
I couldn't find such an API.
The flow of a CUDA app is like this:
a) the CPU part launches a GPU kernel and sleeps
b) the kernel executes; when it's done, the
We don't rely on it (we use other more robust verification as well),
but comparing the elapsed time (which can be faked) to the round trip
time (which can't be faked) is a pretty dirty and easy way to tell if
something is up. Many of the workunits on our project are exploratory
and don't
I'm pretty sure there's a way to get the time a CUDA application
spends on the GPU; it's in the API somewhere.
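The API in question is CUDA events: cudaEventRecord / cudaEventElapsedTime measure time between two points on the GPU's own timeline. A sketch (requires the CUDA toolkit and a device; my_kernel is a placeholder and error checking is omitted):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void my_kernel() { /* placeholder work */ }

int main() {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);   // enqueued before the kernel
    my_kernel<<<1, 1>>>();
    cudaEventRecord(stop, 0);    // enqueued after the kernel
    cudaEventSynchronize(stop);  // block until 'stop' has been reached

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // GPU time between the events
    std::printf("kernel took %.3f ms on the GPU\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}
```

The catch for BOINC is that this has to be done inside the application, around each kernel launch; the client can't observe it from outside, which fits David's point that the client doesn't measure GPU time.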
On Sep 3, 2009, at 11:51 AM, Nicolás Alvarez wrote:
No, the client doesn't measure GPU time at all. Is it even
possible to get that information from nvidia APIs?
Sent from
On Sat 22 Aug 2009 17:54:07, Nicolás Alvarez wrote:
On Thu 11 Jun 2009 11:04:13, Kathryn Marks wrote:
On Thu, Jun 11, 2009 at 10:37 PM, Jorden van der Elst
els...@gmail.com wrote:
Hi Rom,
Why did all of the 6.6.36 versions go out as recommended versions
without even testing
I checked in changes that add the following fields to the result table:
double elapsed_time: the time interval during which the application ran.
double flops_estimate: the scheduler's original estimate of the speed
of the application in FLOPS.
For app versions with a plan class, this is