One possible band-aid is to use the @fork decorator to run each computation 
in a separate process. Only the result is pulled back into the parent, and 
any intermediate objects or caches die with the child process:

http://www.sagemath.org/doc/reference/sage/parallel/decorate.html
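In Sage itself this is just `from sage.parallel.decorate import fork` and then putting `@fork` on the expensive function. The same idea can be sketched with only Python's standard library; `heavy_compute` and `run_isolated` below are hypothetical placeholders, not Sage APIs:

```python
# Sketch of the idea behind Sage's @fork decorator, using only the
# standard library: run each computation in a throwaway child process
# so that any caches built up during the computation are freed when
# the child exits, keeping the parent's memory footprint flat.
import multiprocessing as mp

def heavy_compute(n):
    # Stand-in for an expensive computation that allocates lots of
    # intermediate state; only a small result is returned.
    table = [i * i for i in range(n)]   # pretend this is a huge cache
    return sum(table) % 997             # small result sent to the parent

def run_isolated(func, *args):
    """Run func(*args) in a fresh process and return only its result."""
    with mp.Pool(processes=1) as pool:
        return pool.apply(func, args)

if __name__ == "__main__":
    # Each iteration gets a clean slate; nothing accumulates in the parent.
    results = [run_isolated(heavy_compute, n) for n in (10, 100, 1000)]
    print(results)
```

Spawning a process per iteration has real overhead, so for millions of computations it may be worth batching many iterations into each child before tearing it down.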



On Tuesday, November 6, 2012 10:44:42 AM UTC-5, Ben wrote:
>
> I tried searching for an older posting on this without any luck, and I'm 
> sure it's been discussed before. The closest I could come up with is this:
>
>
> https://groups.google.com/forum/?fromgroups=#!topic/sage-support/FvrXRUuhy1Q
>
> which pretty much describes the issue I'm encountering. However, the 
> memory question itself was not addressed; rather, a way to circumvent the 
> issue was provided.
>
> I'm doing a similar thing: I have a particular set I can compute for a 
> given dynamical system, and I wish to do this for many dynamical systems, 
> storing only the small amount of data that is the result. Each computation 
> takes a small amount of memory; however, even with 16 GB of memory it 
> quickly runs out after some thousands of iterations. My best guess is 
> that Sage/Python is caching/storing information from the previous 
> computations. Is there a way to clear this and essentially have a "clean 
> slate" for the next iteration? (I'd like to be doing millions or billions 
> of such computations...)
>
> I'd post my code except that it isn't a nice simple snippet. It involves a 
> couple experimental patches and the computation is actually quite involved.
>
> Thanks,
>   Ben
>

-- 
You received this message because you are subscribed to the Google Groups 
"sage-support" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
Visit this group at http://groups.google.com/group/sage-support?hl=en.