Hello,

If I have..

    data Path = L Path | R Path | T

    paths = T : branch paths
    branch (p:ps) = L p : R p : branch ps
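[For concreteness, here is a minimal runnable sketch of the definitions above; the derived Show/Eq instances and the main action are additions for demonstration only. `paths` enumerates all finite paths breadth-first, sharing earlier elements in later ones:]

```haskell
-- Definitions from the post, made self-contained.
-- Show and Eq are derived here purely so we can print/compare results.
data Path = L Path | R Path | T deriving (Show, Eq)

-- A CAF: a top-level value with no arguments. Once forced, the reduced
-- spine is retained for the lifetime of the program.
paths :: [Path]
paths = T : branch paths

branch :: [Path] -> [Path]
branch (p:ps) = L p : R p : branch ps
branch []     = []   -- unreachable for the infinite list; added for totality

main :: IO ()
main = print (take 7 paths)
-- prints [T,L T,R T,L (L T),R (L T),L (R T),R (R T)]
```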
This will be a CAF that can never be garbage collected, but may grow
indefinitely large as it gets reduced. Correct?

Is it possible to avoid this problem somehow? What I have in mind is a
special thunk which gets copied (as a normal thunk) before it gets
reduced. Pointer references are preserved during copying, except
self-referential pointers, which are updated to point to the new thunk.

Alternatively.. I had thought maybe a function with a dummy argument
like this would do..

    paths x = let paths' = T : branch paths' in paths'

But paths' will just be lambda lifted as a CAF (I believe).

This seems to solve the lambda lifting problem..

    paths x = T : branch (paths x)

but I'm not sure how compilers will treat this. I think I'll also lose
the sharing of earlier paths in new paths, unless the compiler
optimises the self-reference.

If I redefined my datatype..

    data Path x = L (Path x) | R (Path x) | T x

I could then use the dummy-argument solution..

    paths x = let paths' = T x : branch paths' in paths'

This seems to solve both problems, but there's yet another problem I
anticipate: when using any of these dummy-argument solutions I have to
make sure the argument is not a constant (or I'm back to the lambda
lifting problem again). So it has to be some handy unknown variable
(argument)? This presents the problem that the unknown variable may
itself be quite large, and have its lifetime unduly prolonged because
it's now referenced by many paths.

Any advice?

Thanks
--
Adrian Hey
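[For reference, a runnable sketch of the parameterised-type variant discussed above; the derived instances and the choice of `()` as the dummy argument are illustrative assumptions. Because `paths'` now depends on the argument `x`, it cannot be floated to the top level as a CAF, and each call of `paths x` builds a fresh cyclic list:]

```haskell
-- Parameterised datatype from the post; Show/Eq derived for demonstration.
data Path x = L (Path x) | R (Path x) | T x deriving (Show, Eq)

branch :: [Path x] -> [Path x]
branch (p:ps) = L p : R p : branch ps
branch []     = []   -- unreachable for the infinite list; added for totality

-- The dummy-argument version: paths' is local and mentions x,
-- so (in principle) it is not shared across calls as a CAF would be.
paths :: x -> [Path x]
paths x = let paths' = T x : branch paths' in paths'

main :: IO ()
main = print (take 5 (paths ()))
-- prints [T (),L (T ()),R (T ()),L (L (T ())),R (L (T ()))]
```

Note that whether sharing across calls is actually avoided still depends on the compiler: GHC's full-laziness transformation may float subexpressions that do not mention `x`, which is exactly the concern raised in the post.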