Hi!
> Use small sleep for lack of better way to check that all threads
> are blocked on mutex "mutex".
> 
> Signed-off-by: Jan Stancek <[email protected]>
> ---
>  .../interfaces/pthread_attr_setschedpolicy/2-1.c   |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git 
> a/testcases/open_posix_testsuite/conformance/interfaces/pthread_attr_setschedpolicy/2-1.c
>  
> b/testcases/open_posix_testsuite/conformance/interfaces/pthread_attr_setschedpolicy/2-1.c
> index 80ce906..1f8825a 100644
> --- 
> a/testcases/open_posix_testsuite/conformance/interfaces/pthread_attr_setschedpolicy/2-1.c
> +++ 
> b/testcases/open_posix_testsuite/conformance/interfaces/pthread_attr_setschedpolicy/2-1.c
> @@ -161,6 +161,9 @@ int main(void)
>       if (rc)
>               FAIL_AND_EXIT("create_thread HIGH", rc);
>  
> +     /* give threads a moment so they can block on mutex "mutex" */
> +     sleep(2);
> +
>       rc = pthread_mutex_unlock(&mutex);
>       if (rc)
>               FAIL_AND_EXIT("pthread_mutex_unlock()", rc);

Hmm, so you did hit the small race window between the new thread
signaling the main thread that it is running and the subsequent call to
pthread_mutex_lock() on the tested mutex?

I do not like this solution much, but this is not easy to do properly.
One possibility is to pin all the threads to a single CPU via the
affinity interface (open_posix_testsuite/include/affinity.h). Then the
main thread can wait until the thread with the lowest priority has
executed and safely conclude that the rest are already blocked on the
mutex (as they run with FIFO scheduling, a higher-priority thread that
were still runnable would never let the lowest-priority one run).

-- 
Cyril Hrubis
[email protected]

