i've been away from julia for a while, so i'm not up-to-date on recent 
changes, and i'm looking at an odd problem.

i have some code (messier and more complex than i would like) that is 
called to print a graph of values.  the print code uses tasks.  under 0.3 
this works, but under 0.4 the program just sits there, using no CPU.
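
to give a feel for the shape of it (this is a made-up reduction with 
invented names, not the real code), the pattern is roughly a producer 
task feeding values to the main task, which prints them:

    # made-up reduction, not the real code: one task produces values,
    # the main task consumes and prints them
    function emit()
        for x in 1:10
            produce(x)       # 0.3/0.4-era task API
        end
    end

    t = Task(emit)
    for v in t               # iterating a Task calls consume()
        println(v)           # printing is where the hang seems to happen
    end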

if i dump the thread stacks (using gstack PID) i see:

Thread 4 (Thread 0x7efe3b6bb700 (LWP 1709)):
#0  0x00007f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007efe3bf62b5b in blas_thread_server () from /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so
#2  0x00007f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0
#3  0x00007f004231604d in clone () from /lib64/libc.so.6
Thread 3 (Thread 0x7efe3aeba700 (LWP 1710)):
#0  0x00007f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007efe3bf62b5b in blas_thread_server () from /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so
#2  0x00007f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0
#3  0x00007f004231604d in clone () from /lib64/libc.so.6
Thread 2 (Thread 0x7efe3a6b9700 (LWP 1711)):
#0  0x00007f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007efe3bf62b5b in blas_thread_server () from /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so
#2  0x00007f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0
#3  0x00007f004231604d in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7f0044710740 (LWP 1708)):
#0  0x00007f0042e8120d in pause () from /lib64/libpthread.so.0
#1  0x00007f0040a190fe in julia_wait_17546 () at task.jl:364
#2  0x00007f0040a18ea1 in julia_wait_17544 () at task.jl:286
#3  0x00007f0040a40ffc in julia_lock_18599 () at lock.jl:23
#4  0x00007efe3ecdbeb7 in ?? ()
#5  0x00007ffd3e6ad2c0 in ?? ()
#6  0x0000000000000000 in ?? ()

which looks suspiciously like some kind of deadlock: threads 2-4 are just 
idle openblas workers, but the main thread (thread 1) is parked in wait() 
(task.jl), called from lock() (lock.jl), waiting for something that never 
comes.

but i am not using threads myself, just tasks.
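
for what it's worth, i do know tasks alone can deadlock without any OS 
threads being involved; a trivial (made-up) example of the failure mode 
i suspect:

    # tasks alone can deadlock: nothing ever calls notify(c), so the
    # scheduler has no runnable task and the process sits using no CPU
    c = Condition()
    t = @async begin
        wait(c)          # this task blocks until notified
        println("woken")
    end
    wait(t)              # main task blocks on t; notify(c) never comes

that reproduces the symptom (idle process, main thread parked in wait), 
but not, as far as i can tell, the lock.jl frame in the trace above.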

hence the question.  any pointers appreciated.

thanks,
andrew
