On 10/8/12 17:36 , Richard Haselgrove wrote:
> 'Leave apps in memory' hardly applies to GPU tasks

Exactly, that's how it should be. From my understanding, GPU tasks are
never merely suspended anyway, but always terminated, to free precious
GPU memory. Please correct me if this assumption is incorrect.

David, what's the actual mechanism used to "suspend" GPU tasks? Also,
why does the client fail to detect that a supposedly "suspended" task is
still running (and subsequently take care of it)?

> Do you have proper "critical section" protection around kernel launches

No, and from the app's own perspective it shouldn't be necessary: the
app is robust enough not to produce invalid results if it is interrupted
at an arbitrary point. However, it might be useful to add these anyway
to "protect" the potentially fragile GPU runtime/driver ecosystem from
harm.
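For reference, a minimal sketch of what such protection could look like, wrapping a kernel launch in the BOINC API's critical-section calls (the kernel `computeStep` and its launch parameters are placeholders for illustration, not our actual code):

```cuda
#include "boinc_api.h"  // boinc_begin_critical_section(), boinc_end_critical_section()

// Placeholder kernel; stands in for any real compute kernel.
__global__ void computeStep(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

void runStep(float* d_data, int n) {
    // Ask the BOINC client not to suspend or kill the process while
    // a kernel is in flight, so driver state stays consistent.
    boinc_begin_critical_section();

    computeStep<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaDeviceSynchronize();  // wait until the launch has completed

    boinc_end_critical_section();
}
```

The critical section only keeps the client from acting between launch and completion; whether that is strictly required to keep the driver happy is exactly the open question here.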

> and thread synchronisation?
>     "Without thoroughly having looked for the usual thread safety
>     issues, I did notice that there is a distinct absence of any
>     explicit synchronisation.  That implies the same thread safety
>     issues will likely be there, on top of some driver & Cuda runtime
>     issues that really require at least some
>     explicit synchronisation present to avoid.

Honestly, I have no idea what kind of missing thread synchronization
he's talking about. CUDA's in-kernel thread synchronization is used
where necessary and is completely unrelated to this problem. CUDA kernel
launches are implicitly synchronized by the API calls we use, and where
kernels are launched asynchronously we do use a context synchronization
barrier. There is no multi-threading in the app other than CUDA itself.
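Concretely, the pattern I mean looks roughly like this (a simplified sketch; `kernelA`/`kernelB` and the buffer names are placeholders, not our real code): kernels launched on the same (default) stream are implicitly ordered, and a context-wide barrier is issued before results are read back.

```cuda
// Asynchronous launches: the host returns immediately, but kernels
// on the same stream execute in launch order relative to each other.
kernelA<<<grid, block>>>(d_buf);
kernelB<<<grid, block>>>(d_buf);  // runs only after kernelA finishes

// Context-wide synchronization barrier: blocks the host thread until
// all previously issued GPU work in this context has completed.
cudaDeviceSynchronize();

// Safe to read results now. A plain cudaMemcpy on the default stream
// would also have synchronized implicitly with the preceding kernels.
cudaMemcpy(h_buf, d_buf, size, cudaMemcpyDeviceToHost);
```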


Cheers,
Oliver

_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.