I built a small application using PyQt4 and pyqtgraph to visualize some
data. The app has 32 graphs, each plotting a deque of size 512. The plots
are redrawn each time 200 new ints have been cycled through each deque.
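For reference, the data-handling side of the update loop looks roughly like
this (a minimal sketch with illustrative names; the real app additionally
calls each pyqtgraph curve's setData() and processes Qt events):

```python
from collections import deque

N_GRAPHS = 32
DEQUE_SIZE = 512
BATCH = 200

# One fixed-length deque per graph; once maxlen is reached,
# old samples fall off the left end automatically.
buffers = [deque(maxlen=DEQUE_SIZE) for _ in range(N_GRAPHS)]

def push_batch(samples):
    """Cycle a batch of ints through every deque.

    In the real app this is followed by curve.setData(list(buf))
    for each of the 32 curves, and a pass through the Qt event loop.
    """
    samples = list(samples)
    for buf in buffers:
        buf.extend(samples)

push_batch(range(BATCH))
```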

The plotting slows down linearly over time. In other words, after cycling
through 100,000 data points, the time between calls to processEvents() is
much longer than it was at T0.

I have done a little memory profiling. Watching the process in top, it's
clear that there is a memory leak. I also tried invoking
objgraph.show_most_common_types(). That test reveals that the counts of
all object types plateau, except for weakref objects, which keep
growing and growing.

I have come to believe that the growing number of weakrefs is slowing
down execution. Is my analysis misguided? How can I introspect further?
If the slowdown can be attributed to weakref escalation, what are some
next steps?

Thanks,
Kevin


-- 
https://mail.python.org/mailman/listinfo/python-list
