Hi, all!
I haven't received any replies to my last posting concerning a memory leak
in the "image" module (11/5/99). I realize that just saying "image has a
memory leak!" is probably not all that helpful, so I've done a few more
tests and thought I'd present them here for your opinions...
From my last posting, I believe there's a memory leak in the image
display routines, because when I replace the network output with a print
module (instead of graphic display), it runs without any leak. I ran Greg's
suggested test (11/5/99) on some sample networks to verify that this was the
case.
Since then, I've started to look into what cases cause this leak in image,
and have found two main areas:
1) With image's internal cache set to "no cache", there is a constant
   increase in memory of ~336 bytes with repeated "execute once" commands
   (i.e., without changing any inputs to the network).
2) When using hardware rendering and changing the network inputs, there is
   a continual memory increase.
Let me explain my tests...
I set up a simple network: Construct -> Color -> Image, and created a
simple square with the Construct module (origin=[0 0 0], deltas=[1 1 1],
counts=[2 2 2], data=[1 1]). This displays a single square on the screen.
I set all the module cache settings to "no cache", then ran the network
under the following configurations:
caseA) img internal no-cache, hw rendering
caseB) img internal no-cache, sw rendering
caseC) img internal last-cache, sw rendering
caseD) img internal last-cache, hw rendering
Monitoring the memory usage with a Usage module, I noticed:
For caseA and caseB (no img internal cache), if I just repeatedly "exec
once" the network, the memory usage keeps increasing. Since all the modules
are set to no-cache and the network is just doing the same thing over and
over again, this shouldn't happen. For caseC and caseD (img internal
cache-last) there was no increase in usage. The increases in caseA and
caseB were identical (i.e., hw/sw rendering has no effect on this). So it
seems like something's wrong with the img internal no-cache setting, unless
I'm misinterpreting something here. Note that if you are repeating these
tests, you need to "Reset Server" after changing the img internal cache
setting for it to take effect.
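By the way, if you want to sanity-check the Usage module's readings from
outside DX, you can watch the exec process with ps. Here's a rough Python
sketch of what I mean (the pid handling and the "ps -o vsz=" flags are
assumptions -- check your ps(1) man page and adjust for your system):

    # Rough sketch: poll a process's memory between "exec once" runs.
    # Assumes a Unix ps that supports "-o vsz=" and that you pass the
    # exec process's pid on the command line.
    import subprocess, sys, time

    def vsz_kb(pid):
        # virtual size in KB, as reported by ps
        out = subprocess.check_output(["ps", "-o", "vsz=", "-p", str(pid)])
        return int(out.strip())

    pid = int(sys.argv[1])
    prev = None
    while True:
        cur = vsz_kb(pid)
        if prev is not None:
            print("vsz=%d KB (delta %+d KB)" % (cur, cur - prev))
        prev = cur
        time.sleep(5)   # sample roughly once per execution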
I then decided to test what happens when the network is actually computing
something different on each execution. So I hooked up a sequencer (1-10)
and a compute([a a a]) to the origin input of my Construct, to translate
the square in space. Again, the cache values for these modules are set to
no-cache, and I ran the same four cases as above. For this test, the
hw-rendering cases showed a large memory increase after each run (~250k),
but the software rendering cases didn't. See the numbers below for details.
Another test I did was to toggle the sequencer back and forth between two
values (say 1 and 2) repeatedly. With the modules set to cache none (and
img internal set to cache-last), usage should be consistent every time the
sequencer=1 value is displayed, since the previous image is always the
sequencer=2 one. However, with hw-rendering, this value kept increasing.
So, it's kinda hard to explain with words. But the tests are easy
enough...run them yourselves! If you'd like numbers to compare, here are
the results I got using Greg's test method for the 2 test types and 4 cases
noted above. The numbers given are the reported memory used after repeated
executions of the same test/case. If there is no memory leak, these values
should stabilize after the first execution or two.
Constant Network State (equivalent to repeated "exec once")
---------------------------------------------------------------
caseA: 583264, 587728, 588064, 588400, 588736 (336-byte increase per run)
caseB: 1856176, 1856816, 1857152, 1857488, 1857824 (336-byte increase per run)
caseC: 1784400, 1785600, 1785600, 1785600, 1785600 (stable)
caseD: 585120, 589168, 589168, 589168, 589168 (stable)
Network w/ Sequencer 1-10
--------------------------
caseA: 1082848, 1332112, 1581456, 1830800, 2080144 (249344-byte increase per run)
caseB: 3354352, 3357920, 3361280, 3364640, 3368000 (3360-byte increase per run)
caseC: 3368048, 3368048, 3368048, 3368048, 3368048 (stable)
caseD: 2342768, 2589088, 2834992, 3080976, 3326960 (245984-byte increase per run)
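To make the "stabilize after the first execution or two" criterion
concrete, here's a little Python sketch of the check I'm eyeballing, using
caseA and caseC from the first table as the sample data:

    # Per-run deltas for the reported samples: a leak shows up as a
    # constant positive delta, a stable case goes to zero after warm-up.
    samples = {
        "caseA (no-cache, hw)":   [583264, 587728, 588064, 588400, 588736],
        "caseC (last-cache, sw)": [1784400, 1785600, 1785600, 1785600, 1785600],
    }
    for name, vals in samples.items():
        deltas = [b - a for a, b in zip(vals, vals[1:])]
        print(name, deltas)
    # caseA -> [4464, 336, 336, 336]  (steady 336-byte leak per execution)
    # caseC -> [1200, 0, 0, 0]        (stable after the first run)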
A couple more comments about those numerical results... As I mentioned
above, with img-cache set to none, memory usage increases with repeated
network executions. For the hw-rendering cases (A and D), there are large
memory usage increases in the sequencer test. Note that for the sequencer
test in caseB, the memory increase is exactly 10 times its "leak" in the
"exec once" test, which implies there is no extra leak beyond that constant
336 bytes per network execution. Also, if you look at the difference in
per-run increase between caseA and caseD (both hw-rendering), you'll see
it's also 3360 bytes, i.e., 10x the "exec once" leak... this is consistent
with my observed difference between img internal cache=none and cache=last.
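In other words, the numbers hang together arithmetically. A quick Python
check of that reasoning (all values copied from the tables above):

    # Consistency check: with the sequencer covering 10 steps per "run",
    # the per-run gap between img cache=none and cache=last (hw render)
    # should be ten copies of the 336-byte per-execution leak.
    exec_once_leak = 336        # bytes/execution, img internal cache=none
    steps = 10                  # sequencer values 1..10
    caseA_per_run = 249344      # hw render, cache=none
    caseD_per_run = 245984      # hw render, cache=last
    caseB_per_run = 3360        # sw render, cache=none
    assert caseA_per_run - caseD_per_run == steps * exec_once_leak  # 3360
    assert caseB_per_run == steps * exec_once_leak                  # 3360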
I applied the results of this test to my original network that was giving
me headaches... it's fairly large, with like 12 pages and several macros.
With its internal sequencer set from 2-10 (9 steps), I get the following
memory usage numbers:
img internal no-cache, hw render (my original setup)
    5063248, 8728240, 12391376, 16055744, 19720368 (~3664k increase per run (9 steps))
img internal last-cache, sw render (my "new" setup)
    5127600, 5199904, 5125968, 5126848, 5125728 (<100k variation per run... generally more like ~2k)
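The same delta check applied to those hw-render numbers (values copied
from above; the per-step division is mine):

    # Per-run growth of my big network under hw render, cache=none
    vals = [5063248, 8728240, 12391376, 16055744, 19720368]
    deltas = [b - a for a, b in zip(vals, vals[1:])]
    print(deltas)              # -> [3664992, 3663136, 3664368, 3664624]
    print(deltas[-1] / 9.0)    # ~407k per sequencer step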
So it looks to me like there's something odd in the img internal-cache-none
and hardware-rendering display code. You can also reproduce the results on
the sample networks by changing the Image module to internal cache=none and
repeatedly "exec once"-ing them. I originally ran these tests on an SGI
running Irix 6.5 with the 4.0.5 binaries, but have also verified them on my
NT machine running the 4.0.6 binaries.
Any suggestions would be greatly appreciated, but for now it looks like I
need to run sw-rendered with img internal cache-last if I want to avoid
memory leaks. Which is too bad, 'cause hw-rendering on an SGI is nice. =)
I also encourage you to repeat the tests yourself, to catch whether I did
something dumb (quite possible).
Thanks for the input!
Jeremy Zoss
Southwest Research Institute
[EMAIL PROTECTED]