> >         T1                T2               T3             T4
> >     acquire(&x)       acquire(&x)       read(abc)      write(abc)
> >     write(abc)        read(abc)         y = abc*18     z = 42/abc
> >     release(&x)       release(&x)       ... whatever   ...
> >
> > What Darryl (I think) wants to see is something like:
> >
> >     while in T1's C.S. `x', the variable `abc':
> >         saw a write from T1 of size whatever
> >         saw a write from T4 of size whatever
> >         saw a read from T3 of size whatever
> >         saw a read from T4 of size whatever

Getting a trace of a particular memory location is trivial; either hack
the Lackey tool a bit, or simply use the following flags to Helgrind
(I think DRD has similar flags):

  --trace-addr=0xXXYYZZ     show all state changes for address 0xXXYYZZ
  --trace-level=0|1|2       verbosity level of --trace-addr [1]

As for the rest of the long discussion about scheduling, I don't
understand its purpose.  The normal thing to do w.r.t. thread
debugging, once you have a log of which thread accessed which location
in which order, is to feed that log into one of several by-now
standard race detection algorithms: either a pure happens-before
algorithm, as exemplified by DRD and Helgrind in 3.4, or one based on
locksets, which Helgrind in 3.3 partially exemplifies.

What does messing with the scheduling get you that standard race
detection algorithms don't?

J

_______________________________________________
Valgrind-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/valgrind-users
