On Fri, Oct 24, 2008 at 08:29, Ioannis Papadopoulos
<[EMAIL PROTECTED]> wrote:

>
> I should have written "task scheduling and management". Yes, the threads are
> OS-handled, but everything else has to be handled by a runtime system. This
> means tasks have to be created and scheduled efficiently to achieve maximum
> speedup. Moreover, the threads have to be reusable for different tasks,
> taking care of cache affinity etc. - these are all handled rather
> successfully by OpenMP.

Well, not really. In my experience OpenMP is only good when you have a few
parallel sections, because the overhead of entering and leaving a parallel
section is high. And that's not the case with Mesa.
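
To make the overhead point concrete, here's a minimal sketch (a made-up
helper, not Mesa code): every "#pragma omp parallel for" opens its own
parallel region, with a thread fork at the top and an implicit barrier at
the bottom.

#include <omp.h>

/* Hypothetical per-vertex pass.  One parallel region per call: fine if n is
 * large and the function is called rarely, but if it's called thousands of
 * times per frame on short arrays, the fork/join cost dominates the work. */
void scale_verts(float *out, const float *in, int n)
{
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < n; i++)
        out[i] = in[i] * 2.0f;   /* placeholder for the real per-vertex work */
}

That fork/join happens on every entry to the region, which is exactly what
hurts when the parallel sections are small and frequent.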

>>>
>>> 3) OpenMP is easy even for beginners in parallel processing
>>> 4) you can always remove OpenMP by telling the compiler that you don't
>>> want support (ok, even for pthreads you can do that, but manually)
>>>
>>>
>>
>> Well, in my experience, OpenMP also has a number of disadvantages.
>> Granted, it is very easy to write your first parallel program with it,
>> but it is also extremely fragile, because the parallel semantics can
>> be left implicit (which in my experience leads to a high number of
>> bugs that are difficult to find). Basically, once you tackle bigger
>> programs, you end up adding a lot of shared/private clauses
>> everywhere, which becomes intractable inside a big codebase like Mesa.
>>
>
> Likewise, with a pthread solution you have to put the locks in the correct
> places, use volatile, etc. The worst part is that you have to maintain it as
> well.

Well, my opinion is that once you're at the level where you've separated
the shared variables from the private ones, you might as well use pthreads.

(And FWIW, using volatile in threaded code means you've lost already:
volatile often amounts to spinlocking, and it doesn't enforce atomicity, so
you also have to use memory barriers. As a rule of thumb, one should never
use volatile for variable sharing in threaded code.)
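
A toy example of what I mean (hypothetical counter, not Mesa code): with
volatile alone the increment is still a non-atomic read-modify-write, so two
threads can race and lose updates; a plain variable behind a mutex (or a real
atomic) is what actually works.

#include <pthread.h>

/* Broken: volatile only stops the compiler caching the value in a register,
 * it gives neither atomicity nor ordering. */
volatile int hits_broken = 0;
void count_hit_broken(void) { hits_broken++; }   /* racy read-modify-write */

/* Works: ordinary variable protected by a mutex. */
static int hits = 0;
static pthread_mutex_t hits_lock = PTHREAD_MUTEX_INITIALIZER;

void count_hit(void)
{
    pthread_mutex_lock(&hits_lock);
    hits++;
    pthread_mutex_unlock(&hits_lock);
}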

>
> Although I'm not normally a very strong advocate of OpenMP, in this case I
> think it fits the task perfectly: I tend to see OpenGL as just operations on
> matrices, which means the data parallelism is there and is best exposed
> using OpenMP.

Well, I agree here: my experience with OpenMP tells me that I wouldn't want
to deal with it for anything other than very constrained and simple pieces of
code (like you say, large matrix multiplies for example). But the problem is
that OpenGL is quite far from being just operations on matrices; the
complexity is well above that. If Mesa were only doing matrix operations,
we'd have written it on top of LINPACK already. For example, cache
friendliness is much harder to get right when you access textures and vertex
attributes, and that's where OpenMP fails big time...
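
Just to show the contrast, this is the kind of self-contained, regular loop
nest where OpenMP genuinely is a one-line win (plain square matrix multiply,
nothing Mesa-specific) - and Mesa's hot paths simply don't look like this:

#include <omp.h>

/* Square matrix multiply, row-major, c = a * b.  The data parallelism is
 * obvious and the access pattern is regular, so a single pragma is enough. */
void matmul(float *c, const float *a, const float *b, int n)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            float sum = 0.0f;
            for (int k = 0; k < n; k++)
                sum += a[i * n + k] * b[k * n + j];
            c[i * n + j] = sum;
        }
}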

Stephane
