Sorry for taking so long to get back to you @andrewfayres 

I pulled down your fix from your repo and ran it both on my OSX machine in CPU 
mode and in an nvidia-docker container 
(https://hub.docker.com/r/jessebrizzi/dl-dev/) on Linux for GPU mode, and I 
believe a memory leak is still present.

I have updated my bug reproduction repository 
[here](https://github.com/jessebrizzi/MXNet-Bug) to show the exact behavior I 
am testing.

The memory leak I am still observing appears to be independent of the bound 
network's size. I tested this by running with various max batch sizes, from 
which I randomly sampled batch sizes for my test input, and regardless of the 
setting, the native memory growth after 10000 iterations was roughly the same.
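For reference, the test loop above is roughly the following shape. This is only a hedged sketch: `runInference` is a hypothetical stand-in for binding the network and running a forward pass through the MXNet JVM bindings (the real code is in the linked MXNet-Bug repository), and the RSS measurement assumes a Linux `/proc` filesystem.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Random;

public class LeakRepro {
    static final int ITERATIONS = 10_000;
    static final int MAX_BATCH_SIZE = 32; // varied across runs in the test

    // Hypothetical placeholder for the actual MXNet forward pass:
    // resize/bind the executor to `batchSize` and run forward().
    static void runInference(int batchSize) {
        // real implementation lives in the MXNet-Bug repo
    }

    // Resident set size in kB, read from /proc on Linux; -1 elsewhere.
    static long rssKb() {
        try {
            for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
                if (line.startsWith("VmRSS:")) {
                    return Long.parseLong(line.replaceAll("\\D+", ""));
                }
            }
        } catch (IOException ignored) { }
        return -1;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        long before = rssKb();
        for (int i = 0; i < ITERATIONS; i++) {
            // random batch size in [1, MAX_BATCH_SIZE]
            int batch = 1 + rng.nextInt(MAX_BATCH_SIZE);
            runInference(batch);
        }
        long after = rssKb();
        System.out.println("native RSS growth (kB): " + (after - before));
    }
}
```

With the real forward pass plugged in, the reported RSS growth stays roughly constant across different `MAX_BATCH_SIZE` values, which is what makes me think the leak is independent of the bound network size.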

The fix does seem to address the upsize vs. downsize issue observed earlier.

To maintain parity with the Python interface, I would still suggest switching 
the resize logic to the backend method that the Python interface uses, but I 
understand that messing with a JNI interface change is a pain.

Could you post an example of the code you were running to test the change?

[ Full content available at: 
https://github.com/apache/incubator-mxnet/issues/10867 ]