> > You are right. But lots of code uses condition variables. I would like
> > to use helgrind on several code parts that use condition variables, but
> > currently the false positive count is too high.
Oh yes!
> Thanks for the feedback. I might be able to improve the situation, but
> that will have to wait until after 3.3.0 is released now.
That'll be great!
I've modified my test (attached, q2.cc); hope it will be helpful :)
It now has N worker threads. If N >= 2, the race is reported even for GLOB1.
GLOB1 and GLOB2 are now read-only in the workers.
As I understand it, helgrind can currently transfer Exclusive(worker) ->
Exclusive(parent), but cannot transfer ShRO(worker1,worker2) ->
Exclusive(parent) in the presence of cond_wait().
I did not find VALGRIND_HG_POST_WAIT anywhere in valgrind or on the net.
Is it supposed to be used like this?
#include "helgrind.h"
...
pthread_mutex_lock(&MU);
while (COND != n_tasks) {
pthread_cond_wait(&CV, &MU);
}
VALGRIND_HG_POST_WAIT(&CV)
pthread_mutex_unlock(&MU);
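
For reference, the problematic pattern boils down to roughly this minimal
sketch (two threads read a location, then the parent writes it again after
cond_wait(); the names here are illustrative, not taken from q2.cc):

#include <pthread.h>

static int G = 0;  // written by the parent, read-only in the readers
static int n_done = 0;
static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;

static void *reader(void *) {
  int v = G;  // G becomes ShRO(reader1, reader2)
  (void)v;
  pthread_mutex_lock(&mu);
  n_done++;
  pthread_cond_signal(&cv);
  pthread_mutex_unlock(&mu);
  return NULL;
}

int main() {
  pthread_t t1, t2;
  G = 1;  // Exclusive(parent)
  pthread_create(&t1, NULL, reader, NULL);
  pthread_create(&t2, NULL, reader, NULL);
  pthread_mutex_lock(&mu);
  while (n_done != 2)
    pthread_cond_wait(&cv, &mu);
  pthread_mutex_unlock(&mu);
  G = 2;  // ideally Exclusive(parent) again; helgrind flags a race here
  pthread_join(t1, NULL);
  pthread_join(t2, NULL);
  return 0;
}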
Thanks,
--kcc
On Dec 6, 2007 12:08 AM, Julian Seward <[EMAIL PROTECTED]> wrote:
>
> > On Wednesday 05 December 2007 18:10, Christoph Bartoschek wrote:
> > [...]
> >
> > You are right. But lots of code uses condition variables. I would like
> > to use helgrind on several code parts that use condition variables, but
> > currently the false positive count is too high.
>
> Thanks for the feedback. I might be able to improve the situation, but
> that will have to wait until after 3.3.0 is released now.
>
> J
#define _MULTI_THREADED
#include <pthread.h>
#include <unistd.h>  // sleep(), usleep()
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <queue>
#include <vector>
// This test case tries to be a minimal reproducer of thread pool usage.
//
// We have several permanently working threads (worker()) which take
// functions (callbacks) from somewhere (add_callback()) and execute them.
// These threads do not carry any state between callbacks.
// So, logically this is the same as if we destroyed a worker thread and
// recreated it each time we have a new callback.
//
// Helgrind can hardly understand this by itself,
// but a simple source annotation should help.
// See ANNOTATE_BEGINNING_OF_CALLBACK/ANNOTATE_END_OF_CALLBACK.
//
// This test program has 3 global variables: GLOB1, GLOB2, GLOB3.
// GLOB1: helgrind reports a race only if (n_workers >= 2).
// GLOB2: helgrind reports a race, but ANNOTATE_... should help.
// GLOB3: helgrind reports a race and ANNOTATE_... will not help.
//        Can this be fixed by other means?
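
// NOTE: the two annotations mentioned above do not exist in helgrind.h;
// the no-op placeholders below only sketch the proposed interface.
// A real implementation would expand them to helgrind client requests
// instead of nothing.
#define ANNOTATE_BEGINNING_OF_CALLBACK() do { } while (0)
#define ANNOTATE_END_OF_CALLBACK()       do { } while (0)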
int COND = 0; // condition for cond_wait() loop
pthread_mutex_t MU = PTHREAD_MUTEX_INITIALIZER; // for CV
pthread_cond_t CV = PTHREAD_COND_INITIALIZER; // works with MU
typedef int (*F)(void);
pthread_mutex_t MU_Q = PTHREAD_MUTEX_INITIALIZER; // for callbacks
static std::queue<F> callback_queue; // protected by MU_Q
// not protected by locks, synchronized via cond_wait()
static int GLOB1 = 0, GLOB2 = 0;
// first, protected by MU, then synchronized via cond_wait()
static int GLOB3 = 0;
// execute until one of the callbacks returns 1
void *worker(void *parm)
{
  int stop = 0;
  F f;
  while (!stop) {
    pthread_mutex_lock(&MU_Q);
    if (!callback_queue.empty()) {
      f = callback_queue.front();
      callback_queue.pop();
    } else {
      f = 0;
    }
    pthread_mutex_unlock(&MU_Q);
    if (f) {
      // ANNOTATE_BEGINNING_OF_CALLBACK (helgrind client request here)
      stop = f();
      // ANNOTATE_END_OF_CALLBACK (helgrind client request here)
    }
    usleep(1000); // don't burn CPU; a real thread pool would block instead of sleeping
  }
  return NULL;
}
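
// Not used by the test: a sketch of how worker() could block instead of
// polling, assuming a second condition variable CV_Q that add_callback()
// would signal after pushing (add_callback() above does not signal it, so
// this helper is illustrative only and kept out of the actual test).
static pthread_cond_t CV_Q = PTHREAD_COND_INITIALIZER; // would pair with MU_Q
static F get_callback_blocking()
{
  pthread_mutex_lock(&MU_Q);
  while (callback_queue.empty()) {
    pthread_cond_wait(&CV_Q, &MU_Q);
  }
  F f = callback_queue.front();
  callback_queue.pop();
  pthread_mutex_unlock(&MU_Q);
  return f;
}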
void add_callback(F f)
{
  pthread_mutex_lock(&MU_Q);
  callback_queue.push(f);
  pthread_mutex_unlock(&MU_Q);
}
int callback_do_something_usefull()
{
  // TODO: replace this sleep with VALGRIND_HG_POST_WAIT(&CV) near cond_wait().
  sleep(1); // we want the waiter to get there first
  int val1 = GLOB1; // no race is reported here
  int val2 = GLOB2; // no race is reported here
  pthread_mutex_lock(&MU);
  GLOB3 = val1 + val2;
  pthread_mutex_unlock(&MU);
  pthread_mutex_lock(&MU);
  COND++;
  printf("Signal: %d\n", COND);
  pthread_cond_signal(&CV);
  pthread_mutex_unlock(&MU);
  return 0; // continue
}
int callback_finish_thread()
{
  return 1; // stop
}
// Usage: ./a.out [N_WORKERS [N_TASKS]]
int main(int argc, char **argv)
{
  int n_workers = 2;
  int n_tasks = 20;
  if (argc >= 2) {
    n_workers = atoi(argv[1]);
  }
  if (argc >= 3) {
    n_tasks = atoi(argv[2]);
  }
  assert(n_workers <= n_tasks);
  std::vector<pthread_t> workers;

  // accessed before pthread_create()
  GLOB1 = 1;
  for (int t = 0; t < n_workers; t++) {
    pthread_t threadid;
    pthread_create(&threadid, NULL, worker, NULL);
    workers.push_back(threadid);
  }

  // accessed after pthread_create() but before ANNOTATE_BEGINNING_OF_CALLBACK
  GLOB2 = 1;
  for (int i = 0; i < n_tasks; i++) {
    add_callback(callback_do_something_usefull);
  }
  pthread_mutex_lock(&MU);
  GLOB3 = 1;
  pthread_mutex_unlock(&MU);

  // now we wait until callback_do_something_usefull() signals
  pthread_mutex_lock(&MU);
  while (COND != n_tasks) {
    printf("Wait : %d\n", COND);
    pthread_cond_wait(&CV, &MU);
  }
  pthread_mutex_unlock(&MU);

  // At this point callback_do_something_usefull() has signalled
  // and will never write to any of these variables again.
  // I believe that worker() can be removed from the TSETs of
  // all three variables.
  //
  // It is still possible that callback_do_something_usefull()
  // has not exited yet and we have not executed ANNOTATE_END_OF_CALLBACK.
  GLOB1 = 2; // helgrind reports a race here if n_workers >= 2
  GLOB2 = 2; // helgrind reports a race here
  GLOB3 = 2; // helgrind reports a race here
  fprintf(stderr, "GLOB1: %d\n", GLOB1);
  fprintf(stderr, "GLOB2: %d\n", GLOB2);
  fprintf(stderr, "GLOB3: %d\n", GLOB3);

  // a real program would run more useful callbacks here

  // kill workers
  for (int t = 0; t < n_workers; t++) {
    add_callback(callback_finish_thread);
  }
  // join
  for (int t = 0; t < n_workers; t++) {
    pthread_join(workers[t], NULL);
  }
  return 0;
}
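
To reproduce, something like the following should work (assuming gcc and a
valgrind install with helgrind; the exact flags don't matter much, -g just
makes the race reports readable):

  g++ -g -O0 q2.cc -lpthread -o q2
  valgrind --tool=helgrind ./q2 2 20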