2010/7/1 Simon Marlow <marlo...@gmail.com>:
> So one option is to make a new CCS root for each thread, and that way each
> thread would end up creating its own tree of CCSs.  That would seem to work
> nicely - you get per-thread stacks almost for free. Unfortunately it's not
> really per-thread profiling, because if one thread happens to evaluate a
> thunk created by another thread then the costs of doing so would be
> attributed to the thread that created the thunk (maybe that's what you want,
> and maybe it's consistent with the CCS view of the world, I'm not sure).
>  This also means you still need to lock access to the CCS structures,
> because two threads might be accessing the same one simultaneously.

Yes, I was thinking about the new CCS root you mention. To solve the
sharing problem, I was thinking of storing CCS ids instead of
pointers, and making each thread allocate a new CCS the first time it
encounters a given id. I don't think it would be much work, as there
shouldn't be that many shared CCSs.
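A minimal sketch of the id-indirection idea, assuming a per-thread table keyed by a global CCS id (all names here are hypothetical, not GHC's actual RTS API): each thread lazily allocates its own copy of a CCS when it first sees an id, so the hot path never needs a lock.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of per-thread CCS indirection.
 * Names are illustrative only, not the GHC RTS API. */

typedef struct CostCentreStack_ {
    int  ccs_id;      /* globally unique id shared by all threads */
    long scc_count;   /* costs accumulated by this thread alone */
} CostCentreStack;

#define MAX_CCS 1024

typedef struct {
    /* Private to one thread, indexed by ccs_id; no locking needed. */
    CostCentreStack *local[MAX_CCS];
} ThreadProfState;

/* Return this thread's copy of the CCS for ccs_id,
 * allocating it lazily the first time the id is seen. */
static CostCentreStack *lookupLocalCCS(ThreadProfState *tp, int ccs_id)
{
    if (tp->local[ccs_id] == NULL) {
        CostCentreStack *ccs = malloc(sizeof(CostCentreStack));
        ccs->ccs_id = ccs_id;
        ccs->scc_count = 0;
        tp->local[ccs_id] = ccs;
    }
    return tp->local[ccs_id];
}
```

Under this scheme only the id allocator needs synchronisation; attributing a cost is a lookup in thread-private memory, which is why the number of shared CCSs (and hence duplicated copies) matters for the space overhead.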

> Do you really want per-thread profiling, anyway?  What happens when there
> are thousands of threads?

Anyway, I think valuable information can still be extracted by doing
offline analysis of the data, or by hiding irrelevant information.
Also, integration with ThreadScope or similar tools would only look at
the information relevant to what is being displayed, much like using
different zoom levels.

>> I didn't know about this. I've done some really small tests and the
>> overhead of the HPC system seems lower than the one from the profiling
>> system. The problem I see is that it may have to be changed a lot to
>> comply with the cost centre semantics (subsuming costs and the like).
>
> Well, it would be a completely different cost semantics.  Whether that's
> good or bad I can't say.
>
> Cheers,
>        Simon
>

_______________________________________________
Cvs-ghc mailing list
Cvs-ghc@haskell.org
http://www.haskell.org/mailman/listinfo/cvs-ghc