Finalizers: the Legend that Won't Die
Manuel wrote (snipped):

> I have just read Hans Boehm's POPL paper on finalizers. His suggestion for the use of finalizers in single-threaded systems is to provide a `runFinalizers' routine, instead of relying on the asynchronous execution that, as established, requires support for concurrency. I am not sure whether we have given this option really serious consideration.

I can only see one problem with this. Suppose you have the following sequence of events:

    let x = seq (unsafePerformIO runFinalizers) y
        finalizer = [... something involving x ...]
    [... attach finalizer to something ...]
    [... something becomes free, putting finalizer on the to-be-run list ...]
    [... computation requiring x ...]

Then during the computation of x, the following things are liable to happen:

(1) Evaluation of x begins. x is blackholed.
(2) runFinalizers is invoked.
(3) finalizer is begun.
(4) finalizer needs the value of x. Oops, what do we do now?

Is it safe to run runFinalizers anywhere except in the main IO trunk? Otherwise you always risk coming a cropper on blackholed values, don't you?

George
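To make the hazard concrete, here is a minimal stand-in (a sketch of my own, not from the original mail) in which the role of runFinalizers is played by directly running a "finalizer" that demands the value under evaluation. GHC's RTS detects the self-dependency on the blackholed CAF and raises an exception (printed as <<loop>>); a system without that check could deadlock or crash instead:

    import System.IO.Unsafe (unsafePerformIO)

    -- Forcing 'x' blackholes it and enters the unsafePerformIO; the
    -- stand-in "finalizer" then demands x again -- step (4) above.
    x :: Int
    x = unsafePerformIO $ do
          let finalizer = print x   -- a finalizer that needs the value of x
          finalizer                 -- stands in for runFinalizers firing here
          return 42
    {-# NOINLINE x #-}

    main :: IO ()
    main = print x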
Re: haskell finalisers driven underground
I think essentially there are only three ways you can have Haskell finalizers.

(1) Every FFI implementation must implement concurrency. I'm not convinced it must be preemptive, but at the very minimum you need some sort of concurrency, since you have some arbitrary number of finalizers (plus the main thread) all running together, all sharing the same space of potentially half-evaluated values, and so all needing the ability to go to sleep and wake up whoever is half-way through an evaluation. This is a problem for NHC, and possibly also for Hugs, so it can't be done. However, it is how GHC does it.

(2) Allow values to be evaluated more than once. This breaks various important compiler optimisations, and so right now is not an option for anyone.

(3) Delay the finalizers until there are no half-evaluated values: for example, for non-concurrent Haskells, immediately before or after a primitive IO action in the main trunk of a program (not inside unsafePerformIO!). (A sketch of this scheme follows below.) This would work fine for many people, like me; but purists who do very long pure computations without ever troubling the IO monad, impurists who run lengthy calculations inside unsafePerformIO, and statists who do lengthy computations in the state monad may be surprised by the long delay before their finalizer gets run.

At the moment the consensus seems to be not to have Haskell finalizers in the standard, but to leave them up to the implementation. I think I'm secretly hoping NHC will implement (1) or (3), forcing Hugs to bow to market forces.
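A minimal sketch of option (3), assuming the storage manager merely appends finalizers to a to-be-run list and the main IO trunk drains it between primitive IO actions; the names here are illustrative, not taken from any actual implementation:

    import Data.IORef
    import System.IO.Unsafe (unsafePerformIO)

    -- The to-be-run list the collector would append to.
    toBeRun :: IORef [IO ()]
    toBeRun = unsafePerformIO (newIORef [])
    {-# NOINLINE toBeRun #-}

    -- Drained only from the main IO trunk, at a point where no
    -- half-evaluated value can be demanded by a finalizer.
    runFinalizers :: IO ()
    runFinalizers = do
      fins <- readIORef toBeRun
      writeIORef toBeRun []
      sequence_ (reverse fins)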
Objections to runAtomically
I don't like runAtomically either, because once again it assumes a global lock. This is fine for GHC or Hugs or NHC on single processors, but it would be a pain if you had multiple processors. You can't avoid locking between processors altogether, of course; atomicModifyPVars and the unsafePerformIO hack I just posted require data to be updated consistently (you don't want the same unsafePerformIO being evaluated twice). However, that is not so bad, because the locking is only on particular bits of data; it's reasonable that access to particular thunks or PVars will be owned by one processor or another, so that if two processors are working on independent bits of state they can chug along perfectly happily without locking each other, and only need to talk when one processor wants to get at a bit of state owned by another.

Although Alastair suggests compilers might try to spot when runAtomically can be used safely without stopping all but one processor, I think analysis of IO actions to check for potential conflicts might be even harder than getting Hugs to run Haskell finalizers 8-) It would require region analysis and probably worse. However, we don't really need to discuss this anyway, since I don't think either runAtomically or atomicallyModifyIORef needs to be in the FFI standard. I'm quite happy to leave this open.
Re: The Revenge of Finalizers
Simon Marlow wrote:

> > > I don't know how to achieve the same goal with atomicModifyIORef.
> >
> > I do. To modify ioRef1 and ioRef2 simultaneously, write
> >
> >     atomicModifyIORef ioRef1 (\ contents1 ->
> >        unsafePerformIO (atomicModifyIORef ioRef2 (\ contents2 -> blah blah)))
> >
> > The actual modification will take place when the result or contents of ioRef1 or ioRef2 get evaluated.
>
> I don't understand how this works. Unless I'm missing something, you'll see the contents of ioRef1 at the point at which the first atomicModifyIORef takes place, but the contents of ioRef2 from the time at which the unsafePerformIO is evaluated, which is some unspecified time later. Perhaps we're talking about different sorts of atomicModifyIORef?

I am now thinking of atomicModifyIORef as having the type

    atomicModifyIORef :: IORef a -> (a -> (a,b)) -> IO b

Thus when it is called, the effect is to construct:

(1) a thunk of type (a,b) containing the application of the function to the thunk previously in the IORef;
(2) a thunk containing fst $ the thunk from (1), which is put in the IORef;
(3) a thunk containing snd $ the thunk from (1), to be returned.

Subsequent evaluation of the result of atomicModifyIORef will force the thunks in (1) and (3) to be evaluated, meaning the unsafePerformIO gets evaluated.
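As a sketch only (this model is not itself atomic; a real implementation would install the new thunk in one primitive step), the thunks (1)-(3) described above correspond to:

    import Data.IORef

    -- A model of the thunk structure, not a real atomic primitive:
    -- 'pair' is thunk (1), the write installs thunk (2), and the
    -- returned value is thunk (3).  Nothing is forced here.
    atomicModifyIORefModel :: IORef a -> (a -> (a, b)) -> IO b
    atomicModifyIORefModel ref f = do
      old <- readIORef ref
      let pair = f old             -- thunk (1): unforced application
      writeIORef ref (fst pair)    -- thunk (2): fst of (1) goes back in
      return (snd pair)            -- thunk (3): snd of (1) is returned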
Re: The Revenge of Finalizers
Simon Marlow wrote:

> > I'd hoped that blockFinalizers would be useful for defining other primitives but since it won't even work for GHC, I agree that PVar will meet most of our needs. (An even simpler design might be to extend our IORef implementations with 'atomicallyModifyIORef'.) So, is this a design that we could agree on?
>
> I like it. I'd vote for 'atomicModifyIORef' rather than a new PVar type, though.

Yes, I think I would too. So that's the end of PVars. Just to check: is there any problem implementing

    atomicModifyIORef :: IORef a -> (a -> (a,b)) -> IO b

(especially for NHC)? Because that's the type I would like it to have. I can see a number of applications for this function...
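For what it's worth, on an implementation that already has MVars the desired type can be modelled with a sketch like the following. Note the hedge: this version blocks momentarily where a true atomicModifyIORef would not, so it only models the interface, not the non-blocking guarantee:

    import Control.Concurrent.MVar

    -- Same shape as the proposed atomicModifyIORef, built on an MVar.
    atomicModifyViaMVar :: MVar a -> (a -> (a, b)) -> IO b
    atomicModifyViaMVar mv f =
      modifyMVar mv (\ a -> let (a', b) = f a in return (a', b))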
Re: The Revenge of Finalizers
Alastair Reid wrote:

> Alastair: So, is this a design that we could agree on?
> SimonM: I like it. I'd vote for 'atomicModifyIORef' rather than a new PVar type, though.
>
> Ok, onto the second question: can we use atomicModifyIORef to make our code finalizer-safe? I see potential problems wherever two IORefs need to be modified atomically. Obviously, it's easy enough to change code like this:
>
>     foo :: (IORef a, IORef b) -> IO ()
>     foo (ref1,ref2) = do
>       modifyIORef ref1 f
>       modifyIORef ref2 g
>
> to
>
>     foo :: IORef (a,b) -> IO ()
>     foo ref = do
>       modifyIORef ref (f `cross` g)
>
> More difficult would be something like composite objects, where multiple IORefs need to be updated 'at once'. With MVars, you'd use a single MVar as the lock for the whole object and then use IORefs for the mutable bits within the tree. You'd use a similar approach with a construct like blockFinalizers. I don't know how to achieve the same goal with atomicModifyIORef.

I do. To modify ioRef1 and ioRef2 simultaneously, write

    atomicModifyIORef ioRef1 (\ contents1 ->
       unsafePerformIO (atomicModifyIORef ioRef2 (\ contents2 -> blah blah)))

The actual modification will take place when the result or contents of ioRef1 or ioRef2 get evaluated. It's horrible of course, but we've got unsafePerformIO in the FFI standard, so why not? You cannot get deadlocks (because atomicModifyIORef won't deadlock), but instead you can get non-termination. GHC at least detects this non-termination and produces an "Exception: <<loop>>" or something of the sort. To avoid the non-termination, you need to order the ioRefs in some way, or use some similar strategy, just as you'd have to for MVars.

In fact for UniForM this strategy looks more reliable than our existing one of using MVars, since at least atomicModifyIORef is guaranteed to leave something in the ioRef, while with MVars you have to watch out for someone killing the thread at an inconvenient moment when the MVar is empty.
Re: The Revenge of Finalizers
Alastair Reid wrote:

> Alastair: I don't know how to achieve the same goal with atomicModifyIORef.
> George: I do. To modify ioRef1 and ioRef2 simultaneously, write
>
>     atomicModifyIORef ioRef1 (\ contents1 ->
>        unsafePerformIO (atomicModifyIORef ioRef2 (\ contents2 -> blah blah)))
>
> George: The actual modification will take place when the result or contents of ioRef1 or ioRef2 get evaluated.
>
> I must be missing something, because this seems to be riddled with race conditions. In particular, if ioRef1 is updated by a lazy function, then the write to ioRef1 happens but the write to ioRef2 does not.

Yes, but so what? The write to ioRef2 will happen when you try to evaluate something from the unsafePerformIO. You can do that right away (seq'ing whatever the first atomicModifyIORef returned) if it makes you feel happier.

E.g. (to take an actual example of something I use to implement composable events in Haskell) we can implement a function

    simpleToggle2 :: IORef Bool -> IORef Bool -> IO (Maybe (Bool,Bool))

which attempts to flip the two IORefs from True to False (if they are both True), otherwise returning their actual values. You could code this something like this (no, I'm not going to check if it passes GHC):

    simpleToggle2 ioRef1 ioRef2 = do
       res <- atomicModifyIORef ioRef1 (\ contents1 ->
          unsafePerformIO (atomicModifyIORef ioRef2 (\ contents2 ->
             if contents1 && contents2
                then (False, (False, Nothing))
                else (contents2, (contents1, Just (contents1, contents2)))
             )))
       seq res (return res)

Then the first atomicModifyIORef replaces the contents of ioRef1 by a thunk and returns another thunk. The seq then proceeds to evaluate this thunk, causing the unsafePerformIO to be run, which changes ioRef2.

There IS a potential problem, because if you run simpleToggle2 on the same ioRef,

    simpleToggle2 ioRef ioRef

or if two threads run simpleToggle2 simultaneously on the same ioRefs but in opposite order,

    simpleToggle2 ioRef1 ioRef2  ||  simpleToggle2 ioRef2 ioRef1

or in general you have n threads simultaneously running simpleToggle on a cycle of ioRefs, you can get non-termination, because you will find your thunks get linked in a circular chain, with each one depending on the next. This is exactly the same as what happens if you attempt

    writeIORef x (unsafePerformIO (readIORef x))

and then attempt to evaluate the contents of x. However, you also have exactly the same problem when you try to implement simpleToggle2 on MVar Bool, except that there you get deadlock instead. The solution in both cases is to put some constraint on the ordering; for example, construct some well-founded linear order on ioRefs such that simpleToggle2 ioRef1 ioRef2 is never called unless ioRef2 < ioRef1 (I think it's that way round).

If all you have is a data structure containing two IORefs, and the only accesses you want to these IORefs are (a) individual access, with a guaranteed terminating update function, and (b) joint access, then the solution is simply to make sure that (say) you always access the first ioRef first. Then I think you have guaranteed termination. This is exactly the same with MVars. So although some Haskell purists (mentioning no names) might object to using unsafePerformIO like this, it does not seem to make matters worse than using MVars, since it's purely a question of taste whether you prefer non-termination or a deadlock.
Re: The Revenge of Finalizers
Simon Marlow wrote:

> [snip] Don't you run into a problem even if the two threads use the same ordering? Suppose
>
> - thread 1 does the atomicModifyIORef, and gets preempted before doing the seq;
> - thread 2 does its own atomicModifyIORef, and the seq.
>
> Thread 2 gets an inconsistent view of the IORefs.

No. At the time of the seq in thread 2, the thunk (call it Thunk 2) returned by the second atomicModifyIORef will refer to the thunk (Thunk 1) put in there by the first atomicModifyIORef. The seq will cause the unsafePerformIO from the second atomicModifyIORef to be evaluated. This will come to the if statement, which will demand the value of Thunk 1. This will in turn provoke the evaluation of Thunk 1, which will cause the action of thread 1's simpleToggle2 to be evaluated to completion.

Sadly, there does seem to be a problem (with GHC) if thread 1 is asynchronously interrupted while the unsafePerformIO is being done. I'm afraid I can't think of any way of avoiding this except putting a blockAsynchronousExceptions (or whatever it's called) around the seq. Still, at least that would work, because since atomicModifyIORef does not block, there is no reason for it not to. For the equivalent solution with MVars it does not work, and you need a more complicated arrangement of blocks and unblocks.
Re: Objections to runAtomically
Alastair Reid wrote:

> [snip]
> > This is fine for GHC or Hugs or NHC on single processors, but it would be a pain if you had multiple processors.
>
> By 'pain' you mean slow?

Yes.

> Since 90% of uses of runAtomically will be with modifyIORef, we can avoid this overhead by providing atomicModifyIORef in the IORef library as well. Multiprocessor GHC is free to implement it more efficiently if necessary/convenient. [snip]

I'm not convinced the overhead is going to be all in modifyIORef. For one thing, runAtomically once again assumes the existence of some linear ordering on all state operations performed by the program. I don't have any experience of programming for multiple processors, so I don't know what repercussions it would have for the run-time system, but it seems possible that there would be some, even in stateful operations which have nothing to do with runAtomically.

It seems to me that the majority of locking in a multi-processor system is going to be one-to-one: one processor wants permission to get at a bit of state owned by another processor. However, you are asking for something different, where one processor wants to broadcast a "stop running" message to every single other processor, at the same time as perhaps other processors are trying to do the same thing (so you need some mechanism for arbitration). Of course it will be possible, but since Alastair seems to be happy for the IORef library to contain atomicModifyIORef anyway, I don't see a pressing need for the additional primitive.
Re: The Revenge of Finalizers
Alastair Reid wrote:

> > However in general I think we can hide some of the horribleness from the user:
> >
> >     modify2IORefs :: IORef a -> IORef b -> (a -> b -> (a,b,c)) -> IO c
> >     [horrible code deleted]
>
> And if they need to update 3 IORefs or a list of IORefs?

It would be a fairly trivial matter to generalise the code I posted to such cases. But in my own experience, three-way or multi-way synchronisations are only required very rarely. I have yet to come across a case where we needed them here at Bremen, and we use concurrency an awful lot.

> Writing code like that yourself and getting it right and portable between compilers seems to be ludicrously hard. I can't tell if that code is right (my gut says no).

Well, I'm afraid my gut disagrees. Also, I think I've adequately addressed all the specific concerns raised so far.

> Worse though, I don't even know what semantic framework to use to reason about it if we want to be sure the code will work in the presence of strictness analyzers, eager evaluation, parallel evaluation, fully-lazy evaluation, etc. Operational reasoning and reasoning by example struggle with such a task.

Well, I don't think we have a theoretical semantic framework for reasoning about most of the FFI, including especially unsafePerformIO. However, the main assumption being made by the code I gave is that the thunks containing the unsafePerformIO's do not get multiply evaluated. I think this is a reasonable assumption to make. I am not scared by any of the buzzwords you give. For example, a strictness analyzer which wrongly attempted to evaluate the unsafePerformIO inside the lock and got into trouble as a result would in my opinion have only itself to blame. The same applies to eager evaluation. Such methods might also get into trouble with the perfectly reasonable code:

    x <- takeMVar y
    let z = seq x (error "Not to be evaluated")
    putMVar y z
    seq z (return ())

since they would presumably raise the error prematurely, while the MVar was still empty. This is without any use of unsafePerformIO; one would expect eager evaluators and strictness analyzers to be especially careful about going around unsafePerformIO, if they did it at all. As for parallel evaluation, I just don't see the problem, provided precautions are taken to make sure unsafePerformIO's are not multiply evaluated. I don't know what fully-lazy evaluation is, and how it differs from the normally lazy variety.
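The "horrible code deleted" above is gone from the archive. Purely as a guess at its shape, here is a sketch of my own following the nested-atomicModifyIORef trick from the earlier messages; it is not the original code:

    import Data.IORef
    import System.IO.Unsafe (unsafePerformIO)

    -- Update two IORefs 'at once': the write to ref1 becomes a thunk
    -- whose evaluation performs the update of ref2.  The final seq
    -- forces the result promptly so both writes happen right away.
    modify2IORefs :: IORef a -> IORef b -> (a -> b -> (a, b, c)) -> IO c
    modify2IORefs ref1 ref2 f = do
      res <- atomicModifyIORef ref1 (\ a ->
               unsafePerformIO (atomicModifyIORef ref2 (\ b ->
                 let (a', b', c) = f a b
                 in (b', (a', c)))))
      res `seq` return res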
Re: Objections to runAtomically
Alastair Reid wrote:

> [snip] It assumes that side effects can be linearized. Every theory of concurrency I know of makes the same assumption. [snip]

I don't know if the Java Language Specification counts as a theory of concurrency, but you will find, if you check, that side effects are *not* linearised in that language. This is also true of the Java Virtual Machine. Thus it would be a problem if, say, someone were to compile Haskell to the JVM.
Re: The Revenge of Finalizers
Simon Marlow wrote:

> [snip] However, I think we're trying to solve a problem that doesn't exist yet. All the libraries we have which are affected can be fixed by using atomicModifyIORef, and even if one were to arrive which can't be fixed in this way, the chances that someone would also want to use it from a finalizer are rather low. I think a more pressing problem is whether Haskell finalizers can be implemented in Hugs properly - any thoughts on the problem I raised earlier? [snip]

Yes, I agree. I really think we can now consider the problem of whether it is possible to do useful communication between finalizers and the rest of the Haskell world to have been settled in the affirmative.

> PS. I've implemented atomicModifyIORef in GHC and am testing it now.

Yippee. I shall look forward to using it, especially if it is faster than the equivalent construction using modifyMVar.
Re: Objections to runAtomically
George Russell wrote:

> Alastair Reid wrote:
> > [snip] It assumes that side effects can be linearized. Every theory of concurrency I know of makes the same assumption. [snip]
>
> I don't know if the Java Language Specification counts as a theory of concurrency, but you will find, if you check, that side effects are *not* linearised in that language. This is also true of the Java Virtual Machine. Thus it would be a problem if, say, someone were to compile Haskell to the JVM.

Having said that, JVMs do of course permit global locking, so you could implement runAtomically. The effect would be that, while you wouldn't have a single global ordering of side effects, runAtomically would appear to work in any of the linear orderings as observed by the separate threads (or something like that; I'm not digging up the fiendishly complicated chapter 7 or 8 or whatever it is of the JVM specification at this time of night). So I suppose I have to concede that runAtomically is at least implementable.

I still don't think it belongs in the FFI specification, though; as Simon M says, we seem to be able to do everything anyone can think of using atomicModifyIORef. It's not that I don't think locking primitives are a good idea, but runAtomically seems an awfully crude one. It also, incidentally, would trash any functions which implicitly rely on, say, worker threads. For example, some optimisation algorithm could repeatedly try to improve on the best solution found so far, until interrupted by an exception from a worker thread indicating that time was up. Of course you couldn't write such a function in NHC, but in GHC such a technique is potentially useful, and you wouldn't necessarily want to have to tell the user that the function could not be called from inside runAtomically.

Could we not at least replace the global lock by a local one? For example,

    newLock :: IO Lock
    synchronize :: Lock -> IO a -> IO a

would specify a Java-like mechanism, by which synchronize would delay the action if necessary, so that no two actions were ever simultaneously performed on the same lock.

The big question is: can this be implemented with Haskell finalizers in NHC? I think for NHC you would simply implement Alastair's runAtomically and wrap synchronize in it. Then you would know that if, in the middle of "synchronize lock action", there was another attempt to synchronize on lock, it could not be due to a finalizer running inside action (since new finalizers are blocked and finalizers are properly nested), so it must be due to action itself, so we must deadlock. Thus for NHC this is mildly more complicated (and inefficient) than having a global runAtomically. For Hugs/GHC we can easily implement a Lock as an MVar (). The only disadvantage I can see is that for NHC you are delaying new finalizers indefinitely for the duration of the synchronize'd action, but of course this is equally true of runAtomically in NHC. Where this solution wins is that you don't have to delay anything in Hugs or GHC except where the user explicitly requests it.
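For the Hugs/GHC side, the MVar () encoding mentioned above is nearly a one-liner; here is a sketch (withMVar gives exception-safety, assuming the implementation provides it):

    import Control.Concurrent.MVar

    newtype Lock = Lock (MVar ())

    newLock :: IO Lock
    newLock = fmap Lock (newMVar ())

    -- Runs the action while holding the lock; a second synchronize on
    -- the same Lock blocks until the first completes.
    synchronize :: Lock -> IO a -> IO a
    synchronize (Lock mv) act = withMVar mv (\ () -> act)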
Re: The Revenge of Finalizers
(message referred to follows)

Alastair suggested implementing blockFinalizers rather than PVars. However, I dislike this for three reasons:

(1) I'm rather attached to PVars. Not just because I suggested them (actually I think I stole them from Einar Karlsen), but because it looks to me as if they could be implemented very efficiently and would be quite useful. If we take the more general interface

    newPVar :: a -> IO (PVar a)
    updatePVar :: PVar a -> (a -> (a,b)) -> IO b

then PVars are absolutely guaranteed not to block, and updatePVar can be implemented as an atomic operation in any of GHC, NHC and Hugs; I suspect it should even be pretty easy to implement directly in C (since all you are doing is moving pointers to thunks around). In particular, I would like to petition Simon Marlow to include updatePVar in this way in GHC, because (unlike the corresponding solution with MVars) it could very cheaply be guaranteed to work atomically (with MVars you need complicated patterns of expensive block-exception primitives to stop asynchronous exceptions mucking things up). Furthermore, although you cannot provide

    updatePVarIO :: PVar a -> (a -> IO (a,b)) -> IO b

you can implement something fairly similar to it, if you are naughty, by giving updatePVar a function which returns an unsafePerformIO'd action. The action will then of course be executed at some unspecified date, when the user tries to read the contents of the PVar. For UniForM at least there is an application for this which would plug an existing embarrassing hole in the events code, namely that you mustn't throw asynchronous exceptions at threads which use events.

(2) blockFinalizers looks fine for Hugs and NHC, which only have a single-thread model, but it looks tricky in general, where we do not have a conception of "during". Effectively that means an implementation on a parallel architecture which accesses state has to come up with some arbitrary order of state accesses, just so that it can rely on it to specify blockFinalizers. Of course PVars assume an ordering of state accesses too, but only between accesses to a single PVar. (There is a similar problem lurking with unsafe external calls, which is probably why they are called unsafe.)

(3) The implementation of PVars Alastair gives using blockFinalizers will not work in general unless you also specify that Haskell finalizers are properly nested. This is OK for NHC, maybe not for Hugs, and certainly not for GHC. This is not a problem for PVars themselves, since the reason is that Hugs and GHC have concurrency, and on a concurrent machine you would naturally implement PVars using MVars (if not as primitives in their own right).

Alastair Reid wrote:

> > However, even if Haskell finalizers + MVars are impossible in NHC, I don't think Haskell finalizers + mutable state have to be. For example, another mutable variable we could have would be a PVar which is always full and has functions [snip]
> >
> >     updatePVar (PVar ioRef) updateFn = do
> >       [stop any new finalizers running]
> >       a <- readIORef ioRef
> >       writeIORef ioRef (updateFn a)
> >       [reenable finalizers]
> >       return a
>
> Just for the record, I think that if we were to pursue this approach, then the right primitive to add is:
>
>     -- |
>     -- Execute argument atomically with respect to finalizers.
>     -- Nested calls to blockFinalizers are allowed.
>     --
>     -- That is, while executing the argument, no finalizers will start
>     -- to execute.
>     -- (Finalizers that are already executing may continue to execute.)
>     blockFinalizers :: IO a -> IO a
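For a concurrent implementation such as GHC or Hugs, the PVar interface above is indeed trivial to model with MVars; a sketch follows (a real NHC version would instead be a plain, guaranteed non-blocking variable update, as described in point (1)):

    import Control.Concurrent.MVar

    newtype PVar a = PVar (MVar a)

    newPVar :: a -> IO (PVar a)
    newPVar a = fmap PVar (newMVar a)

    -- Never leaves the PVar empty from the caller's point of view:
    -- modifyMVar takes and refills the MVar around the pure update.
    updatePVar :: PVar a -> (a -> (a, b)) -> IO b
    updatePVar (PVar mv) f =
      modifyMVar mv (\ a -> let (a', b) = f a in return (a', b))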
Re: Finalizers Ride Again
Alastair Reid wrote:

> > [snip] I don't see that it's necessary for us to come to a decision right now about PVars unless we want to put them in the FFI standard.
>
> But what if we can't agree on something like PVars, or we decide that Haskell finalizers plus yet another synchronization mechanism is worse than C finalizers?

Well, the first problem (not agreeing on something) is a problem for the Mutable State Standardisation committee (not us). As for "Haskell finalizers plus yet another synchronization mechanism": PVars are not yet another synchronization mechanism, because the whole point is that NHC at least has no synchronization mechanism at all; and if we are going to have a standard synchronization mechanism (which, as I've argued, is a good idea), something like PVars is needed, if we are to take it that P/V/MVars are all impossible. Even if there were no FFI standard, I think PVars or something like them would be a good idea; otherwise it would be impossible to write functions like Alastair's newObject/killObject so that they could be ported between NHC and other Haskell systems.
Why I want Haskell finalizers
The document which Simon and Alastair have kindly put together asks for more details about why I want foreign garbage collectors to be able to call back to Haskell. Well, I can't supply many details, because this is not immediately required by the UniForM/MMiSS project we are working on at Bremen. However, the problem is a fairly simple and general one.

If you have Haskell talking to some other language with a garbage collector, be it Java or SML, then at a given point of the program you can in general expect to have Haskell holding stable pointers to objects referenced from their world by the other language, and vice versa. The most obvious way of garbage-collecting these is for each language to reference the foreign objects using its own version of ForeignPtrs, which then instruct the other's RTS that the corresponding stable pointer is no longer required.

This mechanism does not handle circular pointers (Haskell references an ML object, which in turn references the original Haskell object again), but it is at least a start. Furthermore, as a programmer I think my response to circular dependencies would be to try to avoid them, for example by breaking cycles by making some of the links weak, since a general solution would be fairly complex. I think in any case a general solution would very probably still use finalizers, while doing some complex pointer-chasing to detect and break cycles.
Finalizers compromise
I like what Simon and Alastair have written in the finalizers document. However, I don't think there is any proposal I can wholeheartedly support, so I have to produce my own, which is a mixture of almost all the proposals (1)-(5). I hope, though, that this proposal will at least provide a framework which everyone can accept. The basic idea is that we accept that we cannot agree upon a final version of the FFI standard at this time, so we (a) make an interim change to the standard; (b) experiment with Haskell finalizers; (c) agree a method for settling the discussion in a few months' time. If anyone has any better way of putting this discussion out of its misery, please let's hear it. Here is the proposal in detail:

(1) We change the names of newForeignPtr and addForeignPtrFinalizer to newUnsafeForeignPtr and addUnsafeForeignPtrFinalizer. This can be considered frozen and a settled part of the standard; people can start coding with them right away.

(2) We also include the functions

    newForeignPtr :: Ptr a -> IO () -> IO (ForeignPtr a)
    addForeignPtrFinalizer :: ForeignPtr a -> IO () -> IO ()

However, we should make it clear in the standard that (a) implementations only *may* choose to implement these functions, or may implement them only with restrictions; (b) the specification of these functions, or whether they exist at all, should be regarded by users as something particularly likely to change before the final cut. We can put this section of the standard in italics to indicate its provisional nature.

(3) In the meantime Hugs, GHC and NHC should attempt to implement these functions (using, for example, Simon's patch), and we see how we get on, for example whether there are more insuperable difficulties.

(4) Also, Hugs, GHC and NHC should implement PVars, with the interface

    newPVar :: a -> IO (PVar a)
    updatePVar :: PVar a -> (a -> (a,b)) -> IO b

and we can see if they adequately address Alastair's concerns about mutable state (unless he already considers them adequately answered). Of course for Hugs and GHC the implementation is completely trivial, and if you can't find a student with 10 minutes to spare to write them in terms of MVars, I will be happy to oblige...

NB. The question here is not "Are PVars the best/only possible implementation of concurrency?", just "Do they provide a useful and implementable way for Haskell finalizers to communicate?".

NB2. If Alastair insists on blockFinalizers instead, I don't think we need to argue the point, but I would object to having blockFinalizers in the FFI standard, since blockFinalizers seems to me to make too many assumptions about the nature of the implementation (that state changes by Haskell have some sort of universal ordering, for example). I would also object to having PVars in the FFI standard, since there may be a better way of implementing thread-safe common state.

(5) Since perhaps we will not have unanimity even if we do all this and come back later, perhaps we should all agree now on some decision procedure. My suggestion is that we agree now that we will either (a) cast newForeignPtr and addForeignPtrFinalizer out of the standard altogether; or (b) make them compulsory; or (c) settle some alternative specification/type for them, when at least two of {the NHC team}, {the Hugs team}, {the GHC team} consent. Hopefully this will not take too long, perhaps about 6 months.
Re: Why I want Haskell finalizers
Alastair Reid wrote:

> George writes:
> > If you have Haskell talking to some other language with a garbage collector, be it Java or SML, then at a given point of the program you can in general expect to have Haskell holding stable pointers to objects referenced from their world by the other language, and vice versa. The most obvious way of garbage-collecting these is for each language to reference the foreign objects using its own version of ForeignPtrs, which then instruct the other's RTS that the corresponding stable pointer is no longer required.
>
> So what you want is for the Java GC to call hs_freeStablePtr on all the Haskell objects that just died?

No, that's only a partial (and indeed very incomplete) solution. It relies on the Java GC knowing that that particular reference to the Haskell StablePtr is the only one that matters, and vice versa for the Haskell GC. If you want any slightly more complicated logic (such as if Java has multiple references to the same StablePtr, or if you want to detect whether this particular freeStablePtr might free a circular data structure), you'll have to write it in C.
Re: Finalizers strike back
Alastair Reid wrote:

> > [snip] I'm not sure I really understand the problem. The FFI standard (Release Candidate 4, the one I have printed out here) does not define IORefs, and of course Haskell 98 doesn't either. Therefore, although this code is broken, this particular example doesn't matter if all we are considering is code written in Haskell 98 + FFI.
>
> If there's no shared mutable Haskell state, writing finalizers in Haskell buys you little - what does a finalizer do other than clean up shared state? (Note that it doesn't clean up unshared state - what would be the point?)
> [snip]
> What I hear you saying is that the FFI standard should depend on a concurrency standard. I think there was a strong sentiment that we should avoid this. I agree though that it is necessary if we allow Haskell finalizers.
> [snip]

I think we should remember that the FFI standard has to address various audiences:

(1) Those who want to implement portable code in just FFI + Haskell98. This group does not have access to functions for conveniently manipulating mutable state, so Alastair's problem with IORefs will not be a problem for them; and Haskell inside finalizers will at any rate not harm them. Furthermore, it is incorrect to say that just because there is no Haskell mutable state, there is no reason such people may want it to be possible to call Haskell while a finalizer is running. Consider someone who, say, calls out from Haskell to Java (to do funny graphics, say) and writes a finalizer which calls Java code. At the same time, they also want to do some completely separate pure computation in Haskell, which is made available to Java. Since Java at least does have preemptive concurrency, while it is running at all it is perfectly possible that Java will call the Haskell computation while the Java finalizer is running. You want a license to make the roof fall in at this point; I don't think you should have it.

(2) Those who want to implement portable code in FFI + Haskell98 + something else. (I suspect this is the larger group.) To discuss this at all we simply have to speculate about what "something else" might be. One obvious something else is mutable state. Alastair is assuming that something else is IORefs but not MVars. However, I assert this is implausible, or at least unfortunate, because for reasons I've explained it would normally be a good idea for implementations and programmers to provide and use MVars, even if they do not provide and use concurrency, since otherwise you can't write functions like Alastair's newObject and killObject without making unnecessary assumptions about the properties of the implementation. If NHC does not provide MVars, I think it should. I apologise to Malcolm, but in any case I doubt if implementing MVars in a world with only one thread would cost him a great deal of effort.

To sum up, it's NOT that I want the FFI standard to depend on concurrency. It's once again that I don't want the FFI standard to depend on the *absence* of concurrency (as Alastair's IORef code does).
Finalizers strike back
Alastair wrote:

> [snip] More importantly though, this does nothing at all to guarantee atomicity of Haskell code that manipulates global variables. Consider a data structure consisting of a list of objects. The main thread might add to this list and do searches in the list, and the finalizer removes objects from the list:
>
>     type State = IORef [Object]
>
>     -- used by main
>     newObject :: Object -> IO ()
>     newObject o = do
>       os <- readIORef state
>       writeIORef state (o:os)
>
>     -- used by finalizer
>     killObject :: Object -> IO ()
>     killObject o = do
>       os <- readIORef state
>       writeIORef state (filter (/= o) os)
>
> Some possible interleavings of the IO actions in these functions can result in an object not being added to, or not being removed from, the object list.
> [snip]

I'm not sure I really understand the problem. The FFI standard (Release Candidate 4, the one I have printed out here) does not define IORefs, and of course Haskell 98 doesn't either. Therefore, although this code is broken, this particular example doesn't matter if all we are considering is code written in Haskell 98 + FFI.

However, to me this code just looks totally wrong, because of course I use GHC, a system with preemptive scheduling, and would regard it as incompetent to use anything other than an MVar here. Even for Hugs I don't like this code, because the functions newObject and killObject will fail in the future should anyone try to port them to a system with pre-emptive concurrency, unless the maintainers are careful to avoid running them concurrently. So as Hugs has MVars, I think it is bad practice not to use them here, unless you have some very good excuse (like being desperate for performance). You're basically implementing an API which has the hidden condition that the two functions must not be called simultaneously.

But in any case it's somewhat woolly just to talk in terms of existing compilers. Suppose we posit that there is a GlobalVariables standard. Then indeed Alastair's example would fall over. So we can say that here is a problem which occurs with Haskell98 + FFI + GlobalVariables. However, I would expect a GlobalVariables standard to specify both MVars and IORefs, precisely because of this sort of thing. Of course, as Alastair says, if all you're interested in is a single-threaded implementation, MVars seem a waste of time, since either they don't block at all or they deadlock. But they *are* very necessary if you are intending to write code which may in the future be run on an implementation with full concurrency. It's precisely the same reason as why the FFI standard has a "threadsafe" keyword: although it is these days usually unnecessary, it is nevertheless very important to have it there because of what might happen in the future.

So I can sum this up by saying:

(1) A GlobalVariables standard should specify both MVars and IORefs. (An MVar version of the example appears after this message.)

(2) Someone writing finalizers on a Haskell98 + FFI + GlobalVariables implementation would have absolutely no excuse for using code such as that Alastair has given, since this is a really blinding example of where MVars are needed. Anyone writing a finalizer should be aware (indeed, the FFI standard should say) that the finalizer may be run at any point (after all, when else would you expect it to run?) and should take precautions against it. This is much better than expecting them to rely on arcana such as the details of Hugs' scheduling strategy.

In any case I disagree with Alastair when he says bugs of this sort are as bad as bugs caused by calling Haskell from a finalizer in Hugs. At least in this case Haskell's RTS is still healthy. Assuming the crash comes reasonably often, you can narrow it down by putting in loads of print statements, until eventually you get to the stage of asking who put what into the State variable. Not very pleasant, but you will get there. However, if I understand it correctly, calling Haskell from a finalizer under the FFI may lead to nothing but a core dump. Furthermore, print statements will not help you, because the finalizer might have been called anywhere; at best you will only find out where the GC happened. Even if you guess that the reason Haskell's RTS blew up at just that point was a rogue finalizer during which an external Haskell function happened to be called, you have no way in Haskell of finding out what that finalizer might be, even less of finding out what external Haskell function the outside world was trying to invoke.
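For reference, the MVar version George has in mind would look something like this sketch (the state is passed explicitly rather than as a global, and Object is a stand-in type assumed to have an Eq instance):

    import Control.Concurrent.MVar

    data Object = Object Int deriving Eq   -- stand-in for Alastair's Object

    type State = MVar [Object]

    -- used by main
    newObject :: State -> Object -> IO ()
    newObject state o = modifyMVar_ state (\ os -> return (o : os))

    -- used by finalizer; the take/put pair makes the update atomic
    killObject :: State -> Object -> IO ()
    killObject state o = modifyMVar_ state (\ os -> return (filter (/= o) os))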
Re: FFI digest, Vol 1 #218 - 3 msgs
Alastair Reid wrote:

> [snip] I maintain that it is better to specify something simple and for GHC to document that it goes beyond the specification, just as it does for unboxed types and the like.

I don't believe it's at all simple to specify that Haskell-land may not be invoked at any time during the duration of some C function. I'm not even sure I know what "duration" means in the presence of concurrency.

> What's the point in going to all the effort of coming up with a common specification, all of us hacking our implementations to match the spec, endless arguing over details of the syntax, types, libraries, etc., if it doesn't achieve the goal of improving portability? It would have been much easier if we'd left our (incompatible) implementations alone and not made an effort at defining a portable ffi spec.

People who only call C in a fairly simple way can still port their FFI code between Hugs, GHC and NHC. We all know that the current mechanism is a huge improvement on the old non-portable ones. However, anyone who goes beyond this needs to be aware of the elephant traps, and we should not expect them to read the specification in detail to discover them.

> > Since I regard this caveat as an extremely important one (for example, it severely limits the use of the FFI to link to languages like Java, which also have their own GC)
>
> I just want to note that I believe what you really need is a bunch of entry points into the runtime system, not the ability to call Haskell code.

You also need entry points into (say) the Java runtime system, if you are going to interact with Java. Otherwise, how are you going to tell Java that one of your Haskell-provided callbacks must be delayed until Hugs has finished its garbage collection? Or are you proposing that anyone linking Hugs with Java must implement a lock (in Java) which blocks every single callback to Hugs during Hugs GCs?

> > it needs to be stated very clearly in the documentation of those implementations which have it, rather than being left as an embarrassing hole which the user will only discover after a long and painful analysis of the core dumps.
>
> Note that the problem is _exactly_ the same problem faced when using foreign functions which were declared using the 'unsafe' calling convention. The only difference I can see is that unsafe calls are much more common, so they are more of a problem.

The difference is that the unsafe calling convention uses the word "unsafe". People who use features labelled unsafe may be assumed to be aware of the elephant traps in the specification; if they aren't, it's their own fault. The reason most calls at the moment are unsafe is that the FFI has mostly been used to link directly to C code in a fairly single-threaded way. I'm not sure even this is a good idea, since we can't go on assuming a single native thread for much longer. [snip]
Finalizers etcetera
In fairness I should state that here at Bremen we do not currently need Haskell finalizers. I was thinking of another application altogether, but I don't particularly want to go into details of that. There are, however, particular reasons (such as galloping concurrency) why I think Alastair's proposed hooks would be impractical in that case, even if they were provided by both garbage collectors, which they wouldn't be.

I don't think we are coming any nearer to a solution on this. There seem to be three proposals:

(1) Alastair's: keep the signature as it is, with the restriction. Add miscellaneous hooks allowing C to do things to the Haskell world without actually entering it.

(2) Mine: keep the signature as it is, but remove the restriction, at least so that if implementations decide to impose the restriction anyway, they have to document it very clearly. (E.g. "Hugs implements the FFI standard [ref] with the restriction that finalizers may not call back into Haskell (see paragraph [whatever]).")

(3) Manuel's: reverse the change altogether.

Ideally I'd like Manuel's solution, except that we obviously can't force Hugs and NHC to comply with it. I therefore prefer my own solution, in that it makes clear that you can write simple portable FFI code which runs on any implementation with or without the restriction, but that non-trivial use involving interaction with foreign garbage collectors and the like is likely to fall over.

I don't deny that it is likely to be possible to write non-trivial applications involving finalizers in several languages even in Hugs, but you will have to provide various hooks which are in general quite inappropriate. Other choices might be some kind of lock which could be operated from both the Haskell and C worlds (since then it would be possible, painfully, for a C finalizer to arrange for a Haskell thread sleeping on the lock to wake up, if enough scheduling points arise). Or, as Alastair suggested, you can at least arrange for some Haskell functions, like freeStablePtr, to be made available in the C world. Such things might be a good idea in Hugs or NHC (obviously it's not for me to say), if Haskell finalizers really are impossible. But in general I think it is simply inappropriate for such solutions to be specified in the standard, since they assume too much and impose too much work on the user, if the user is happy to use GHC.
Re: FFI digest, Vol 1 #218 - 3 msgs
Manuel M T Chakravarty wrote:

> George Russell [EMAIL PROTECTED] wrote,
> > Simon Marlow wrote:
> > > PS. I'm sorry to keep banging on about this. Ultimately it doesn't really matter to me that much, since I only really use mallocForeignPtr. I guess I was just intrigued to see if the problem was really as difficult as we'd thought. [snip]
> >
> > I'm glad you are banging on about it. But since it looks unlikely that there is going to be agreement on the matter, I suggest the following compromise. We remove the restriction that finalizers may not call Haskell functions from the main body of the standard, but add a note that says that implementations may opt to impose such a restriction, provided they document it in some standard way.
>
> That's as good as not defining it at all.

I agree that it's extremely unsatisfactory, but it seems the best option (to me), given that actually defining it is not going to be accepted. So at least it would be better if GHC's documentation said "We implement the FFI", while Hugs's and NHC's said "We implement the FFI, with the caveat that finalizers may not call back into Haskell, as specified in section [blah]". Since I regard this caveat as an extremely important one (for example, it severely limits the use of the FFI to link to languages like Java, which also have their own GC), it needs to be stated very clearly in the documentation of those implementations which have it, rather than being left as an embarrassing hole which the user will only discover after a long and painful analysis of core dumps.

> And given that the type signatures of the two versions are different, it isn't even a matter of imposing a runtime restriction.

No, that's not true. I'm happy with the existing signature (addForeignPtrFinalizer specifying a pointer to an external function), since that will often be just what you want. I'm just unhappy with the restriction that the roof may fall in if, by any means whatever, a Haskell function is called from the outside world during the duration of that foreign function.
Re: Updates to FFI spec: performGC
Alastair Reid wrote:

> [snip] Region-based systems have the quite wonderful property that garbage is disposed of promptly - you don't have to wait for the next GC for the memory to be released. Which means that performGC becomes a nullop. [snip]

This is not entirely true. Firstly, some region-based systems may use stacks of some sort, which might mean hanging onto something recognised as garbage until it is possible to pop the stack. Secondly, the user may think that something is garbage at the time of performGC, even though the compiler is not yet able to prove that it is.

I agree with those who regard it as extremely undesirable to rely on the garbage-collection algorithm to carry out important actions (like closing a window or a file) if it matters at all when they happen. I think performGC should be seen more as something like C's "register" storage attribute: a friendly hint to the compiler, but nothing more. Manuel's new wording (except for the typo "advices" instead of "advises") expresses this perfectly, so there probably isn't much point in discussing this further.
Re: Proposed change to ForeignPtr
Alastair Reid wrote:

> [snip] What you're asking Hugs and NHC to do is: add a function to a list whenever you have a finalizer to run; make sure the interpreter will test that bit whenever it is in a position to perform a context switch.

Am I really asking that much? In the paper you wrote, you already propose that the Haskell system implement functions which tell C when it enters GC and when it leaves GC. So what it seems you need is for this to set a flag, so that:

(1) When the (Foogle) finalizer runs, it runs normally if this flag is unset (no GC is running, so it is presumably safe). Otherwise, add the action to a queue.

(2) When Haskell leaves GC, it checks the queue and runs any pending actions, backing up if it's necessary to do so.

The flag and the queue need to be atomically accessed, but since (for Hugs and NHC) you are assuming only one OS thread, that should be enough. You don't, for example, have to worry about exported functions being called absolutely everywhere.

> [snip] The cost is going over all data structures in the system making sure that operations on them are suitably atomic. [snip]

If something like that would work, the cost during normal operation would only be the cost of setting and clearing the flag at the start and end of GC, and checking the queue at the end of GC. I'm sorry, it's frightfully arrogant of me to argue over details of Haskell compilers with their implementors, but what else can I do here? Anyway, the point is a general one: can we implement the FFI without needing the whole machinery of concurrency?

If it's really impossible for NHC or Hugs to implement this, I think I would still rather it was left to the NHC and Hugs documentation to admit that exported Haskell functions basically don't work in some circumstances, than to the GHC documentation to say that actually they do. It's a matter of taste how you do these things. If we take it that there is no way for a finalizer to call functions exported from NHC or Hugs, my personal preference would be to regard this as an undesirable implementation glitch which should be documented in the NHC and Hugs documentation. This would then be something users of the FFI with NHC and Hugs have to worry about, rather than users of the FFI in general. I don't think it makes the FFI worthless for NHC and Hugs, any more than the recent discovery that IO is not really a monad on most implementations makes the Haskell standard worthless.
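A model of the flag-and-queue scheme, written in Haskell purely for exposition (in a real implementation the flag and the queue would live in the RTS, in C, and would be set by the enter-GC/leave-GC hooks; all names are illustrative):

    import Data.IORef
    import System.IO.Unsafe (unsafePerformIO)

    inGC :: IORef Bool
    inGC = unsafePerformIO (newIORef False)
    {-# NOINLINE inGC #-}

    pendingFins :: IORef [IO ()]
    pendingFins = unsafePerformIO (newIORef [])
    {-# NOINLINE pendingFins #-}

    -- (1) Called when a finalizer becomes runnable: run it at once if
    -- no GC is in progress, otherwise queue it for later.
    scheduleFinalizer :: IO () -> IO ()
    scheduleFinalizer fin = do
      gc <- readIORef inGC
      if gc then modifyIORef pendingFins (fin :) else fin

    -- (2) Called by the system when it leaves GC: clear the flag,
    -- then run whatever was queued meanwhile, in arrival order.
    leaveGC :: IO ()
    leaveGC = do
      writeIORef inGC False
      fins <- readIORef pendingFins
      writeIORef pendingFins []
      sequence_ (reverse fins)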
Re: Proposed change to ForeignPtr
Manuel M T Chakravarty wrote:

> [snip] BTW, having two languages with separated heaps interact is a big mess as soon as you can have cycles, which you usually cannot exclude. Alastair already pointed that out, and Martin Odersky also has nice stories to tell about this. [snip]

Yeah, yeah, I know; indeed I think I pointed it out in the message which started this whole discussion. In the example application I am thinking of, I think cycles can be excluded in the first instance. In general, I think there are various rather complicated things one might do in the finalizers to deal with cycles, and frankly I'd rather be able to write the code for that in Haskell than in C.
Re: Updates to FFI spec: performGC
Alastair wrote about performGC (snipped):

> It'd be nice to say that it has to be a full GC - but I've no idea how to specify that in a non-operational (i.e., implementation-independent) way.

I certainly don't think you should constrain implementations to be able to perform a full GC in any sense. It is possible, if unlikely, that someone will get around to implementing the HaskellKit, where GC is entirely dispensed with and replaced by region analysis. Even if that's not the case, I certainly hope that one of these days someone will get around to implementing a Haskell compiler that does at least some easy region analysis. Also, there are probably hard-real-time GC algorithms (like Baker's treadmill), or algorithms which are close to being hard-real-time (like the train algorithm), where doing a full GC would be a major pain.
Re: Proposed change to ForeignPtr
Manuel wrote (snipped):

> I have changed this in the spec now. I attach the wording used in the spec.
>
> \item[newForeignPtr ::\ Ptr a -> FunPtr (Ptr a -> IO ()) -> IO (ForeignPtr a)]
> Turn a plain memory reference into a foreign object by associating a finalizer with the reference. The finalizer is represented by a pointer to an external function, which will be executed after the last reference to the foreign object is dropped. On invocation, the finalizer receives a pointer to the associated foreign object as an argument. Note that there is no guarantee of how soon the finalizer is executed after the last reference is dropped; this depends on the details of the Haskell storage manager. The only guarantee is that the finalizer runs before the program terminates. Whether a finalizer may call back into the Haskell system is system-dependent. Portable code may not rely on such callbacks.
>
> \item[addForeignPtrFinalizer ::\ ForeignPtr a -> FunPtr (Ptr a -> IO ()) -> IO ()]
> Add another finalizer to the given foreign object. No guarantees are made on the order in which multiple finalizers for a single object are run.

I think this is all a rather murky area. Consider two systems, let's call them Haskell and Foogle, which both operate heaps and do their own storage allocation, but also communicate over similar FFIs. We might very reasonably have situations where fairly complex inter-language pointers exist: for example, Haskell holds a ForeignPtr to something in the Foogle heap, and the pointed-to Foogle object in turn references a Haskell object (presumably provided via a StablePtr). Now suppose Haskell wants to drop the ForeignPtr. Then the logical thing for the finalizer to do is to tell Foogle that Haskell is no longer interested in the Foogle object. This then gives Foogle the chance, on its own garbage collection, to in turn drop the Haskell StablePtr. In turn, this means somehow running freeStablePtr. However, I don't know if this scheme is legal, because the Haskell finalizer you need to run freeStablePtr is indirectly provoked by the initial Haskell finalizer.

This is a pity, because this might actually be a fairly good way of managing garbage collection between Foogle and Haskell. Of course you would need at least reference counters (if you can guarantee there are no cycles spanning both languages), or something more powerful otherwise; but reference counters at least can be provided. Furthermore, I do actually have a real case in mind where I might use something like this, though I'd rather not go into details at this time.

I'm afraid I haven't been following this thread lately, so I don't know what the big problem is about calling Haskell from a finalizer; I suppose it's something to do with us being in the middle of garbage collection. However, wouldn't it be better to allow finalizers to somehow provide an action which may call Haskell, but which (a) may be performed at some later date (such as when the GC is over), and (b) consequently may not assume that the pointer being finalized still points to anything?
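Using the Haskell-finalizer variant of newForeignPtr under discussion (newForeignPtr :: Ptr a -> IO () -> IO (ForeignPtr a); in present-day GHC this variant lives in Foreign.Concurrent), the Haskell half of the scheme might look like the following sketch. foogle_drop_ref is of course a hypothetical entry point into the Foogle runtime:

    import Foreign.Ptr (Ptr)
    import Foreign.ForeignPtr (ForeignPtr)
    import Foreign.Concurrent (newForeignPtr)  -- Haskell-finalizer variant

    -- Hypothetical Foogle entry point: drop Foogle's stable
    -- reference to the object so its collector may reclaim it.
    foreign import ccall "foogle_drop_ref"
      foogleDropRef :: Ptr a -> IO ()

    -- Wrap a pointer into the Foogle heap; when the Haskell GC drops
    -- the last reference, the finalizer tells Foogle so.
    wrapFoogleObject :: Ptr a -> IO (ForeignPtr a)
    wrapFoogleObject p = newForeignPtr p (foogleDropRef p)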
Re: Proposed change to ForeignPtr
Malcolm Wallace wrote:

> [snip] I don't see the problem. The Foogle garbage collector runs separately and asynchronously to the Haskell GC. A Foogle object is released by the Haskell collector; then, at a later moment in time, a Haskell object is released by the Foogle collector. Where is the conflict? [snip]

Does "at a later moment in time" mean late enough that we can be sure calling Haskell will be OK? Look, suppose for simplicity that Foogle implements an identical FFI to Haskell's. So we have:

    (Haskell ForeignPtr A) == (Foogle StablePtr A)
    (Foogle StablePtr A) points to (Foogle ForeignPtr B)
    (Foogle ForeignPtr B) == (Haskell StablePtr B)

Haskell frees (Haskell ForeignPtr A), thereby causing the finalizer to run. This presumably does the Foogle equivalent of freeStablePtr on A. This *may* trigger an immediate Foogle garbage collection (I think Foogle's RTS is within its rights if it does), so that Foogle now wants to finalize (Foogle ForeignPtr B). The Foogle finalizer action for ForeignPtr B involves calling back to Haskell, so that (Haskell StablePtr B) can have freeStablePtr applied to it.

Now, is the FFI specification going to guarantee that however quickly the Foogle garbage collector executes the Haskell finalizer, things will work? It seems to me the suggested wording does not guarantee this. All it says is that the finalizer cannot portably call back into the Haskell system. But what about when the finalizer provokes an immediate GC which calls back into the Haskell system? I'm altogether rather puzzled by this notion of FunPtrs which are allowed to call Haskell back at some times and not others. Nor do I understand how it's supposed to work, say, on truly parallel Haskell implementations.

> [snip] Why do you suggest a need for reference counts? In the absence of cycles, surely the existing two garbage collectors (howsoever implemented) are sufficient. [snip]

Garbage collectors need roots. If I understand the situation correctly, a StablePtr is itself a root, until explicitly freed. If you are handing out the same object via the same StablePtr to several different Foogle things, it seems to me you might attach a reference counter to the StablePtr, so that when Foogle says "I am no longer interested in this (Haskell) StablePtr", you decrement the reference counter by 1, and deallocate when it reaches zero. Alternatively, of course, you could create a fresh StablePtr every time something is passed to Foogle. I think reference counters are sometimes used in things like CORBA, where a similar problem arises. Of course they cannot cope with inter-language cycles.
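The reference-counting idea in the last paragraph could be sketched like this; it is entirely illustrative, and a production version would need to manipulate the counter from the C side, atomically:

    import Data.IORef
    import Foreign.StablePtr

    -- A StablePtr handed out to Foogle several times, freed only
    -- when the last hand-out is released.
    data Counted a = Counted (StablePtr a) (IORef Int)

    newCounted :: a -> IO (Counted a)
    newCounted x = do
      sp <- newStablePtr x
      n  <- newIORef 1
      return (Counted sp n)

    retain :: Counted a -> IO ()
    retain (Counted _ n) = modifyIORef n (+ 1)

    -- Decrement; free the StablePtr when no references remain.
    release :: Counted a -> IO ()
    release (Counted sp n) = do
      k <- readIORef n
      writeIORef n (k - 1)
      if k - 1 == 0 then freeStablePtr sp else return ()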
Re: Proposed change to ForeignPtr
Alastair Reid wrote:

> [snip] We should provide a C function hs_freeStablePtr and explicitly say that it is safe to call this from inside a finalizer. [snip]

This would be the simplest solution, but it would not permit you to do anything more sophisticated on the Haskell side, such as reference counting or more complicated strategies to check for possible cycles. You might end up having to do all your inter-language GC code in C. Ugh.
Re: Proposed change to ForeignPtr
Malcolm Wallace wrote:

> [snip] Quite simply, no finaliser (whether in Foogle or Haskell) should be capable of triggering a garbage collection within its call. This condition is absolutely necessary to prevent a cascade effect of cross-language garbage collections, where a finaliser in Haskell could trigger a GC in Foogle which triggers another (nested) GC in Haskell, etc. [snip]

Unfortunately, some sort of cascade is exactly what we want and need: the Haskell finaliser indicates that Haskell is no longer interested in some Foogle object, which means Foogle can run a GC, which indicates Foogle is no longer interested in some Haskell object, and so on.

> Thus, if Haskell.freeSomething calls Foogle.freeSomething, and Foogle.freeSomething cannot cause a Foogle GC, then no Foogle finalisers are run yet, and so Foogle *cannot* call the Haskell world until the Haskell GC is complete. After that, it doesn't matter when the Foogle finaliser decides to run. [snip]

But surely Foogle has no way of knowing when the Haskell GC is over? Suppose Haskell does

    [enter GC] ... [run finaliser 1] ... [run finaliser 2] ... [leave GC]

Then you want Foogle to delay any Haskell calls consequent on finaliser 1 until [leave GC], don't you? How can it? Is it really so difficult to create some queue of delayed functions, which can be appended to from C and which NHC checks every time it leaves GC?
Re: HTML Please
I much prefer HTML to PDF, LaTeX or PS. I can set the font size to be comfortable for my eyes while still leaving space on my screen for an XEmacs window. Also you can do text searches of HTML.
Re: References to values and weak references.
Simon Marlow wrote:

> [snip]
> > (1) the ability for the FFI to give me access to an external value as well as an external function, as in
> >
> >     foreign import "default_arguments" defaultArguments :: Addr
>
> We have this. It's called "foreign label", but it's in an obscure part of the documentation - you're not the first person to miss it :)

Oh, alright, found it. Thanks!

> > Do you know how to use existing support for weak references in the linker/assembler? I believe GNU as/ld has them, but I can't find any reference in the docs.
>
> I've never used weak references, but I would wager that any of the systems ghc works on supports them.

I suppose there are two problems: (1) you need to get weak references into the object files; (2) you need to get the linker to do something sensible with them. For example, for SunOS 5.7, "man ld" yields the following option:

    -z allextract | defaultextract | weakextract

which adjusts ld for different policies for linking weak references.