Hi all,

I have a piece of code in a project I'm working on that has a memory 
leak, but only when performing a parallel evaluation of a function. The 
general idea is that I have a few variables that are large and static; 
these are created on the worker nodes before the actual analysis starts. 
There are also a few variables that change during my outer loop, and 
these are updated on the workers from the master. The workers then run a 
loop around a function call that updates a results variable on the 
workers. I created a few helper functions to make it easier to evaluate 
an expression on the worker nodes.
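For readers unfamiliar with the pattern, here is a minimal sketch of the setup described above. The post doesn't name the parallel framework, so this uses Python's multiprocessing purely as an illustration; all names (`BIG_STATIC`, `init_worker`, `worker_task`) are hypothetical, not from the gist.

```python
# Illustrative sketch only: large static data installed once per worker,
# a small "changing" variable sent each outer iteration, and an inner
# loop on the workers that accumulates into a results variable.
import multiprocessing as mp

BIG_STATIC = None  # large, static data, created once on each worker


def init_worker(static_data):
    # Runs once in every worker process before any tasks are handled.
    global BIG_STATIC
    BIG_STATIC = static_data


def worker_task(changing):
    # The changing variable arrives as the task argument (the analogue
    # of "exporting" it to the worker); the inner loop accumulates into
    # a local results variable.
    results = 0
    for x in BIG_STATIC:
        results += x * changing
    return results


if __name__ == "__main__":
    static_data = list(range(1000))
    with mp.Pool(processes=2, initializer=init_worker,
                 initargs=(static_data,)) as pool:
        # Outer loop on the master: update the changing variable,
        # then evaluate the function on the workers.
        for changing in (1, 2, 3):
            partials = pool.map(worker_task, [changing] * 2)
            print(sum(partials))
```

In this sketch the per-worker initializer plays the role of creating the large static variables on the nodes up front, so only the small changing value crosses the master/worker boundary each iteration.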

The problem I am finding is that if I include my "export" of the 
changing variable and the for loop with my function call in one 
expression that is run on the workers, I get a memory leak on the 
workers. However, if I separate these into two remote calls, the memory 
leak is no longer present. I'm not completely sure what the difference 
is here.

I've made a gist that gives an example of the problem (just the basic 
operations that happen) and also replicates what I am seeing:

https://gist.github.com/dwil/074fdaf96792eb963854

Does anyone have any insights as to what may be going on here?

Regards,
Duane
