Dear Valgrind folks,

please CC me as I am not subscribed.

On Debian Jessie/testing with Valgrind 3.10.0 (package 1:3.10.0-4), I am
trying to debug a segmentation fault in wkhtmltopdf [1][2][3] with the
following command:

        # G_SLICE=always-malloc G_DEBUG=gc-friendly xvfb-run valgrind -v \
          --tool=memcheck --leak-check=full --num-callers=40 \
          --log-file=/tmp/20150325--wkhtmltopdf.log \
          wkhtmltopdf http://giantmonkey.de giantmonkey.pdf
        Loading page (1/2)
        [===========================================>                ] 73%

The run has been stuck at this point for over 24 hours now, while one CPU
core runs constantly at 100 %.

The log file is still being written to, and its end currently looks like this:
        
        […]
        ==1908== TO DEBUG THIS PROCESS USING GDB: start GDB like this
        ==1908==   /path/to/gdb wkhtmltopdf
        ==1908== and then give GDB the following command
        ==1908==   target remote | /usr/lib/valgrind/../../bin/vgdb --pid=1908
        ==1908== --pid is optional if only one valgrind process is running
        ==1908== 
        --1892-- memcheck GC: 6249 nodes, 4986 survivors ( 79.7%)
        --1892-- memcheck GC: 8837 new table size (stepup)
        --1892-- memcheck GC: 8837 nodes, 5101 survivors ( 57.7%)
        --1892-- memcheck GC: 12497 new table size (stepup)
        --1892-- memcheck GC: 12497 nodes, 5654 survivors ( 45.2%)
        --1892-- memcheck GC: 12684 new table size (driftup)
        --1892-- memcheck GC: 12684 nodes, 5657 survivors ( 44.5%)
        --1892-- memcheck GC: 12874 new table size (driftup)
        --1892-- memcheck GC: 12874 nodes, 5657 survivors ( 43.9%)
        --1892-- memcheck GC: 13067 new table size (driftup)
        --1892-- memcheck GC: 13067 nodes, 5657 survivors ( 43.2%)
        --1892-- memcheck GC: 13263 new table size (driftup)
        --1892-- memcheck GC: 13263 nodes, 5657 survivors ( 42.6%)
        --1892-- memcheck GC: 13461 new table size (driftup)
        --1892-- memcheck GC: 13461 nodes, 5657 survivors ( 42.0%)
        --1892-- memcheck GC: 13662 new table size (driftup)
        --1892-- memcheck GC: 13662 nodes, 5657 survivors ( 41.4%)
        --1892-- memcheck GC: 13866 new table size (driftup)
        --1892-- memcheck GC: 13866 nodes, 5657 survivors ( 40.7%)
        --1892-- memcheck GC: 14073 new table size (driftup)
        --1892-- memcheck GC: 14073 nodes, 5657 survivors ( 40.1%)
        --1892-- memcheck GC: 14284 new table size (driftup)
        --1892-- memcheck GC: 14284 nodes, 5657 survivors ( 39.6%)
        --1892-- memcheck GC: 14498 new table size (driftup)
        --1892-- memcheck GC: 14498 nodes, 5657 survivors ( 39.0%)
        --1892-- memcheck GC: 14715 new table size (driftup)
        --1892-- memcheck GC: 14715 nodes, 5657 survivors ( 38.4%)
        --1892-- memcheck GC: 14935 new table size (driftup)
        --1892-- memcheck GC: 14935 nodes, 5657 survivors ( 37.8%)
        --1892-- memcheck GC: 15159 new table size (driftup)
        --1892-- memcheck GC: 15159 nodes, 5657 survivors ( 37.3%)
        --1892-- memcheck GC: 15386 new table size (driftup)
        --1892-- memcheck GC: 15386 nodes, 5657 survivors ( 36.7%)
        --1892-- memcheck GC: 15616 new table size (driftup)

So the table size and the number of nodes keep increasing, while the number
of survivors stays constant at 5657.

Is there a chance that this run will still produce something useful, so that
I should leave it running? Or is it stuck in some kind of infinite loop, so
that I can safely abort it?
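
In case it is useful, I could also try to attach GDB through vgdb as the log
itself suggests, roughly along these lines (the vgdb path and the PID 1908
are copied from the log above and would of course differ for another run):

        # gdb wkhtmltopdf
        (gdb) target remote | /usr/lib/valgrind/../../bin/vgdb --pid=1908
        (gdb) bt

Taking a backtrace a few times in a row should show whether the program is
stuck in the same place or still making progress.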


Thanks,

Paul


[1] http://wkhtmltopdf.org/
[2] https://github.com/wkhtmltopdf/wkhtmltopdf
[3] https://bugreports.qt.io/browse/QTBUG-41360
