Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-05-01 Thread Bas van Dijk
I created a ticket about the asynchronous exception wormholes so
that we won't forget about them:

http://hackage.haskell.org/trac/ghc/ticket/4035

regards,

Bas


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-05-01 Thread Simon Marlow

On 01/05/10 16:17, Bas van Dijk wrote:

I created a ticket about the asynchronous exception wormholes so
that we won't forget about them:

http://hackage.haskell.org/trac/ghc/ticket/4035


Thanks - don't worry, I haven't forgotten, just been busy with other things.

Cheers,
Simon



Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-22 Thread Simon Marlow

On 21/04/2010 19:38, Bas van Dijk wrote:

On Tue, Apr 20, 2010 at 12:56 PM, Simon Marlowmarlo...@gmail.com  wrote:

On 09/04/2010 12:14, Bertram Felgenhauer wrote:

It could be baked into a variant of the forkIO primitive, say

     forkIOwithUnblock :: ((IO a -> IO a) -> IO b) -> IO ThreadId


I agree with the argument here.  However, forkIOWithUnblock reintroduces the
wormhole, which is bad.

The existing System.Timeout.timeout does it the other way around: the forked
thread sleeps and then sends an exception to the main thread. This version
works if exceptions are masked, regardless of whether we have
forkIOWithUnblock.

Arguably the fact that System.Timeout.timeout uses an exception is a visible
part of its implementation: the caller must be prepared for this, so it is
not unreasonable for the caller to also ensure that exceptions are unmasked.
  But it does mean that a library cannot use System.Timeout.timeout invisibly
as part of its implementation.  If we had forkIOWithUnblock that would solve
this case too, as the library code can use a private thread in which
exceptions are unmasked.  This is quite a nice solution too, since a private
ThreadId is not visible to anyone else and hence cannot be the target of any
unexpected exceptions.

So I think I'm convinced that forkIOWithUnblock is necessary.  It's a shame
that it can be misused, but I don't see a way to avoid that.

Cheers,
Simon



I can see how forkIOWithUnblock (or forkIOWithUnmask) can introduce a wormhole:

unmaskHack1 :: IO a -> IO a
unmaskHack1 m = do
   mv <- newEmptyMVar
   tid <- forkIOWithUnmask $ \unmask -> putMVar mv unmask
   unmask <- takeMVar mv
   unmask m

We can try to solve it using a trick similar to the ST monad:


Funnily enough, before posting the above message I followed exactly the 
line of reasoning you detail below to discover that there isn't a way to 
fix this using parametricity.  It's useful to have it documented, though 
- thanks.


Cheers,
Simon




{-# LANGUAGE Rank2Types #-}

import qualified Control.Exception as Internal (unblock)
import Control.Concurrent (forkIO, ThreadId)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

newtype Unmask s = Unmask (forall a. IO a -> IO a)

forkIOWithUnmask :: (forall s. Unmask s -> IO ()) -> IO ThreadId
forkIOWithUnmask f = forkIO $ f $ Unmask Internal.unblock

apply :: Unmask s -> IO a -> IO a
apply (Unmask f) m = f m

thisShouldWork = forkIOWithUnmask $ \unmask -> apply unmask (return ())

The following shouldn't work and doesn't because we get the following
type error:

Inferred type is less polymorphic than expected. Quantified type
variable `s' is mentioned in the environment.

unmaskHack2 :: IO a -> IO a
unmaskHack2 m = do
   mv <- newEmptyMVar
   tid <- forkIOWithUnmask $ \unmask -> putMVar mv unmask
   unmask <- takeMVar mv
   apply unmask m

However we can still hack the system by not returning the 'Unmask s'
but returning the IO computation 'apply unmask m' as in:

unmaskHack3 :: IO a -> IO a
unmaskHack3 m = do
   mv <- newEmptyMVar
   tid <- forkIOWithUnmask $ \unmask -> putMVar mv (apply unmask m)
   unmaskedM <- takeMVar mv
   unmaskedM -- (or use join)

AFAIK the only way to solve the latter is to also parametrize IO with s:

data IO s a = ...

newtype Unmask s = Unmask (forall s2 a. IO s2 a -> IO s2 a)

forkIOWithUnmask :: (forall s. Unmask s -> IO s ()) -> IO s2 ThreadId
forkIOWithUnmask f = forkIO $ f $ Unmask Internal.unblock

apply :: Unmask s -> IO s2 a -> IO s a
apply (Unmask f) m = f m

With this unmaskHack3 will give the desired type error.

Of course parameterizing IO with s is a radical change that will break
_a lot of_ code. However, besides solving the latter problem, the extra
s in IO also creates new opportunities, because all the advantages of
ST can now also be applied to IO. For example we can have:

scope :: (forall s. IO s a) -> IO s2 a

data LocalIORef s a

newLocalIORef :: a -> IO s (LocalIORef s a)
readLocalIORef :: LocalIORef s a -> IO s a
writeLocalIORef :: LocalIORef s a -> a -> IO s a

regards,

Bas




Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-22 Thread Bas van Dijk
On Thu, Apr 22, 2010 at 10:30 AM, Simon Marlow marlo...@gmail.com wrote:
 Funnily enough, before posting the above message I followed exactly the line
 of reasoning you detail below to discover that there isn't a way to fix this
 using parametricity.  It's useful to have it documented, though - thanks.

In their paper on Lightweight Monadic Regions [1] Oleg Kiselyov and
Chung-chieh Shan had to solve a very similar problem.

To quote the middle of section 4.3:

...Instead of leaking a handle, we may try to leak a computation
involving the handle:

do ac <- newRgn (do
        h2 <- newSHandle fname2 ReadMode
        return (shGetLine h2))
   ac

This code is unsafe because newRgn closes the handle h2 when
the child computation exits. Executing the action ac outside of
newRgn would attempt to read from an already closed handle.
Fortunately, this code raises a type error. The handle h2 has the
type SHandle (IORT s m), where s is quantified in the type of
newRgn. The computation shGetLine h2 therefore has the type

(MonadRaise (IORT s m) m2, RMonadIO m2) => m2 String

Since do-bindings are monomorphic, the type checker must resolve
the two constraints above to infer a type for ac. Since m2
is unknown, constraint resolution fails and yields a type error. Indeed,
every way to instantiate m2 that satisfies the suffix constraint
MonadRaise (IORT s m) m2 mentions s, but the type of ac,
which contains m2, cannot mention s because ac takes scope beyond
s. It does not help to existentially quantify over s in the type
of ac, because the type error would just shift to where ac is used...

If we look at:

unmaskHack3 :: IO a -> IO a
unmaskHack3 m = do
  mv <- newEmptyMVar
  tid <- forkIOWithUnmask $ \unmask -> putMVar mv (apply unmask m)
  unmaskedM <- takeMVar mv
  unmaskedM -- (or use join)

the correspondence is clear:

newRgn    = forkIOWithUnmask
h2        = unmask
shGetLine = apply
ac        = unmaskedM

I'm hoping we can solve this in a similar way!

regards

Bas

[1] http://okmij.org/ftp/Haskell/regions.html#light-weight


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-21 Thread Bas van Dijk
On Tue, Apr 20, 2010 at 12:56 PM, Simon Marlow marlo...@gmail.com wrote:
 On 09/04/2010 12:14, Bertram Felgenhauer wrote:

 Simon Marlow wrote:

 On 09/04/2010 09:40, Bertram Felgenhauer wrote:

     timeout t io = mask $ \restore -> do
         result <- newEmptyMVar
         tid <- forkIO $ restore (io >>= putMVar result)
         threadDelay t `onException` killThread tid
         killThread tid
         tryTakeMVar result

 I'm worried about the case when this function is called with exceptions
 already blocked. Then 'restore' will be the identity, and exceptions
 will continue to be blocked inside the forked thread.

 You could argue that this is the responsibility of the whole chain of
 callers (who'd have to supply their own 'restore' functions that will
 have to be incorporated into the 'io' action), but that goes against
 modularity. In my opinion there's a valid demand for an escape hatch
 out of the blocked exception state for newly forked threads.

 It could be baked into a variant of the forkIO primitive, say

     forkIOwithUnblock :: ((IO a -> IO a) -> IO b) -> IO ThreadId

 I agree with the argument here.  However, forkIOWithUnblock reintroduces the
 wormhole, which is bad.

 The existing System.Timeout.timeout does it the other way around: the forked
 thread sleeps and then sends an exception to the main thread. This version
 works if exceptions are masked, regardless of whether we have
 forkIOWithUnblock.

 Arguably the fact that System.Timeout.timeout uses an exception is a visible
 part of its implementation: the caller must be prepared for this, so it is
 not unreasonable for the caller to also ensure that exceptions are unmasked.
  But it does mean that a library cannot use System.Timeout.timeout invisibly
 as part of its implementation.  If we had forkIOWithUnblock that would solve
 this case too, as the library code can use a private thread in which
 exceptions are unmasked.  This is quite a nice solution too, since a private
 ThreadId is not visible to anyone else and hence cannot be the target of any
 unexpected exceptions.

 So I think I'm convinced that forkIOWithUnblock is necessary.  It's a shame
 that it can be misused, but I don't see a way to avoid that.

 Cheers,
        Simon


I can see how forkIOWithUnblock (or forkIOWithUnmask) can introduce a wormhole:

unmaskHack1 :: IO a -> IO a
unmaskHack1 m = do
  mv <- newEmptyMVar
  tid <- forkIOWithUnmask $ \unmask -> putMVar mv unmask
  unmask <- takeMVar mv
  unmask m

We can try to solve it using a trick similar to the ST monad:

{-# LANGUAGE Rank2Types #-}

import qualified Control.Exception as Internal (unblock)
import Control.Concurrent (forkIO, ThreadId)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

newtype Unmask s = Unmask (forall a. IO a -> IO a)

forkIOWithUnmask :: (forall s. Unmask s -> IO ()) -> IO ThreadId
forkIOWithUnmask f = forkIO $ f $ Unmask Internal.unblock

apply :: Unmask s -> IO a -> IO a
apply (Unmask f) m = f m

thisShouldWork = forkIOWithUnmask $ \unmask -> apply unmask (return ())

The following shouldn't work and doesn't because we get the following
type error:

Inferred type is less polymorphic than expected. Quantified type
variable `s' is mentioned in the environment.

unmaskHack2 :: IO a -> IO a
unmaskHack2 m = do
  mv <- newEmptyMVar
  tid <- forkIOWithUnmask $ \unmask -> putMVar mv unmask
  unmask <- takeMVar mv
  apply unmask m

However we can still hack the system by not returning the 'Unmask s'
but returning the IO computation 'apply unmask m' as in:

unmaskHack3 :: IO a -> IO a
unmaskHack3 m = do
  mv <- newEmptyMVar
  tid <- forkIOWithUnmask $ \unmask -> putMVar mv (apply unmask m)
  unmaskedM <- takeMVar mv
  unmaskedM -- (or use join)

AFAIK the only way to solve the latter is to also parametrize IO with s:

data IO s a = ...

newtype Unmask s = Unmask (forall s2 a. IO s2 a -> IO s2 a)

forkIOWithUnmask :: (forall s. Unmask s -> IO s ()) -> IO s2 ThreadId
forkIOWithUnmask f = forkIO $ f $ Unmask Internal.unblock

apply :: Unmask s -> IO s2 a -> IO s a
apply (Unmask f) m = f m

With this unmaskHack3 will give the desired type error.

Of course parameterizing IO with s is a radical change that will break
_a lot of_ code. However, besides solving the latter problem, the extra
s in IO also creates new opportunities, because all the advantages of
ST can now also be applied to IO. For example we can have:

scope :: (forall s. IO s a) -> IO s2 a

data LocalIORef s a

newLocalIORef :: a -> IO s (LocalIORef s a)
readLocalIORef :: LocalIORef s a -> IO s a
writeLocalIORef :: LocalIORef s a -> a -> IO s a
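
To illustrate how that API would presumably be used (hypothetical code, since
the parameterised IO does not exist, and assuming IO s has a Monad instance),
a locally scoped reference would look like this, with the quantified s keeping
the LocalIORef from escaping the call to scope:

-- Hypothetical usage of the proposed API; none of these names exist today.
localCounter :: IO s2 Int
localCounter = scope $ do
  ref <- newLocalIORef (0 :: Int)
  _   <- writeLocalIORef ref 41
  n   <- readLocalIORef ref
  return (n + 1)   -- fine: only the Int escapes
  -- returning 'ref' itself (or 'readLocalIORef ref') would be a type error,
  -- exactly like trying to leak an STRef out of runST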

regards,

Bas


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-20 Thread Simon Marlow

On 09/04/2010 12:14, Bertram Felgenhauer wrote:

Simon Marlow wrote:

On 09/04/2010 09:40, Bertram Felgenhauer wrote:

  timeout t io = mask $ \restore -> do
      result <- newEmptyMVar
      tid <- forkIO $ restore (io >>= putMVar result)
      threadDelay t `onException` killThread tid
      killThread tid
      tryTakeMVar result


I'm worried about the case when this function is called with exceptions
already blocked. Then 'restore' will be the identity, and exceptions
will continue to be blocked inside the forked thread.

You could argue that this is the responsibility of the whole chain of
callers (who'd have to supply their own 'restore' functions that will
have to be incorporated into the 'io' action), but that goes against
modularity. In my opinion there's a valid demand for an escape hatch
out of the blocked exception state for newly forked threads.

It could be baked into a variant of the forkIO primitive, say

 forkIOwithUnblock :: ((IO a -> IO a) -> IO b) -> IO ThreadId


I agree with the argument here.  However, forkIOWithUnblock reintroduces 
the wormhole, which is bad.


The existing System.Timeout.timeout does it the other way around: the 
forked thread sleeps and then sends an exception to the main thread. 
This version works if exceptions are masked, regardless of whether we 
have forkIOWithUnblock.


Arguably the fact that System.Timeout.timeout uses an exception is a 
visible part of its implementation: the caller must be prepared for 
this, so it is not unreasonable for the caller to also ensure that 
exceptions are unmasked.  But it does mean that a library cannot use 
System.Timeout.timeout invisibly as part of its implementation.  If we 
had forkIOWithUnblock that would solve this case too, as the library 
code can use a private thread in which exceptions are unmasked.  This is 
quite a nice solution too, since a private ThreadId is not visible to 
anyone else and hence cannot be the target of any unexpected exceptions.


So I think I'm convinced that forkIOWithUnblock is necessary.  It's a 
shame that it can be misused, but I don't see a way to avoid that.


Cheers,
Simon


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-20 Thread Isaac Dupree

On 04/20/10 06:56, Simon Marlow wrote:

On 09/04/2010 12:14, Bertram Felgenhauer wrote:

Simon Marlow wrote:

On 09/04/2010 09:40, Bertram Felgenhauer wrote:

timeout t io = mask $ \restore -> do
    result <- newEmptyMVar
    tid <- forkIO $ restore (io >>= putMVar result)
    threadDelay t `onException` killThread tid
    killThread tid
    tryTakeMVar result


I'm worried about the case when this function is called with exceptions
already blocked. Then 'restore' will be the identity, and exceptions
will continue to be blocked inside the forked thread.

You could argue that this is the responsibility of the whole chain of
callers (who'd have to supply their own 'restore' functions that will
have to be incorporated into the 'io' action), but that goes against
modularity. In my opinion there's a valid demand for an escape hatch
out of the blocked exception state for newly forked threads.

It could be baked into a variant of the forkIO primitive, say

forkIOwithUnblock :: ((IO a -> IO a) -> IO b) -> IO ThreadId


I agree with the argument here. However, forkIOWithUnblock reintroduces
the wormhole, which is bad.

The existing System.Timeout.timeout does it the other way around: the
forked thread sleeps and then sends an exception to the main thread.
This version works if exceptions are masked, regardless of whether we
have forkIOWithUnblock.

Arguably the fact that System.Timeout.timeout uses an exception is a
visible part of its implementation: the caller must be prepared for
this, so it is not unreasonable for the caller to also ensure that
exceptions are unmasked. But it does mean that a library cannot use
System.Timeout.timeout invisibly as part of its implementation. If we
had forkIOWithUnblock that would solve this case too, as the library
code can use a private thread in which exceptions are unmasked. This is
quite a nice solution too, since a private ThreadId is not visible to
anyone else and hence cannot be the target of any unexpected exceptions.

So I think I'm convinced that forkIOWithUnblock is necessary. It's a
shame that it can be misused, but I don't see a way to avoid that.


[forkIOWithUnblock in the implementation of 'timeout'?] I thought that 
System.Timeout.timeout runs the IO in the original thread for a good 
reason.  Was it just so that it could receive asynchronous exceptions 
correctly? Or also so that myThreadID returned the correct value?  If 
just the former is what we're concerned about, we *could* make it behave 
differently when exceptions are unblocked vs. when it's uninterruptible, 
except that they can be restored to an unblocked state somewhere within 
the io.


[Oh wait, Simon was suggesting that the library should run 
forkIOWithUnblock as a wrapper to its use of 'timeout'.]  Yes, that 
sounds relatively safe.  None of the exceptions thrown to the original 
thread will be discharged unexpectedly as a result of this unblocking, 
because of the forkIO.



Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-19 Thread Simon Marlow

On 10/04/2010 20:07, Iavor Diatchki wrote:

Hello,
I wonder if it might be possible to use just one primitive which
atomically changes the interrupt mask for a thread?  Here is an example
of what I'm thinking:

data MaskingState   = Unmasked
                    | MaskedInterruptible
                    | MaskedNonInterruptible

-- Atomically changes the interrupt mask for a thread, and returns the
-- old mask.
setMask :: MaskingState -> IO MaskingState
setMask = error "primitive?"

-- Change the mask for the duration of an IO action.
-- The action is passed the old mask.
scopedSetMask :: MaskingState -> (MaskingState -> IO a) -> IO a
scopedSetMask m io = do m1 <- setMask m
                        a  <- io m1
                        setMask m1
                        return a

-- Change the mask for the duration of an IO action.
scopedSetMask_ :: MaskingState -> IO a -> IO a
scopedSetMask_ m io = scopedSetMask m $ \_ ->
                        io

-- Simon's mask:
mask :: ((IO a -> IO a) -> IO b) -> IO b
mask f = scopedSetMask MaskedInterruptible $ \m ->
           f (scopedSetMask_ m)


I could replace 3 of the primitives I have (block, unblock, and 
uninterruptibleBlock) with just one as you suggest, yes.  And we could 
replace the scoping behaviour of the primitives with scopedSetMask, 
although some care would be needed to make sure it wasn't any less 
efficient; we would probably have to explicitly expand the code for 
scopedSetMask into the three possible cases.


However, the current block and unblock primitives have a bit of clever 
logic to keep the stack at a constant size when called recursively (see 
the async exceptions paper for details), we would lose that if we used 
the above formulation.


Note that this isn't the only place that changes the masking state: when 
an exception is raised, we have to temporarily mask exceptions for the 
handler, and then restore them to the prevailing state if the handler 
returns.  The way the primitives are currently defined makes this quite 
easy, because I have ready-made stack frames to use.  We could lift this 
into Haskell by redefining catch:


catch :: IO a -> (Exception -> IO a) -> IO a
catch io handler = mask $ \restore -> restore io `realCatch` handler

but this adds more overhead to catch, and the implementation of throw 
still has to restore the masking state from the stack frame for catch.


BTW, you also need a way to check what the current masking state is, 
otherwise the wormhole that was the original problem being discussed 
here reappears. For example, mask should only change the masking state 
to MaskedInterruptible if it is currently Unmasked, otherwise it will 
(a) make the state interruptible if it was uninterruptible, and (b) 
introduce an unnecessary stack frame.
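
A sketch of that check (illustration only, mixing the existing block/unblock
primitives with the getMaskingState API proposed below, so it is not meant to
compile as-is):

mask :: ((IO a -> IO a) -> IO b) -> IO b
mask io = do
  st <- getMaskingState
  case st of
    Unmasked               -> block $ io unblock  -- really unmasked: mask, hand out a real restore
    MaskedInterruptible    -> io id               -- already masked: no extra frame, restore is a no-op
    MaskedNonInterruptible -> io id               -- never weaken an uninterruptible mask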


Cheers,
Simon




-Iavor


On Sat, Apr 10, 2010 at 11:42 AM, Iavor Diatchki
iavor.diatc...@gmail.com wrote:
  Hello,
  It seems that rank-2 types are sufficient to make the more
polymorphic types:
 
  
  {-# LANGUAGE Rank2Types #-}
  import Control.Exception
 
  data Mask = Mask (forall a. IO a -> IO a)

  mask :: (Mask -> IO a) -> IO a
  mask io = do
    b <- blocked
    if b
      then io (Mask id)
      else block $ io (Mask unblock)

  restore :: Mask -> IO a -> IO a
  restore (Mask f) a = f a
  --
 
  This is useful in an example like this:
 
  forkThen :: IO () -> IO a -> IO a
  forkThen io k = mask $ \m ->
    do tid <- forkIO (restore m io)
       restore m k `catch` \e ->
         do when (e == ThreadKilled) (killThread tid)
            throwIO e
 
  -Iavor
 
 
  On Thu, Apr 8, 2010 at 1:23 AM, Simon Marlow marlo...@gmail.com wrote:
  On 07/04/2010 18:54, Isaac Dupree wrote:
 
  On 04/07/10 11:12, Simon Marlow wrote:
 
  It's possible to mis-use the API, e.g.
 
  getUnmask = mask return
 
  ...incidentally,
  unmask a = mask (\restore -> return restore) >>= (\restore -> restore a)
 
  That doesn't work, as in it can't be used to unmask exceptions when they
  are masked.  The 'restore' you get just restores the state to its current,
  i.e. masked, state.
 
  mask :: ((IO a -> IO a) -> IO b) -> IO b
 
  It needs to be :: ((forall a. IO a -> IO a) -> IO b) -> IO b
  so that you can use 'restore' on two different pieces of IO if you need
  to. (alas, this requires not just Rank2Types but RankNTypes. Also, it
  doesn't cure the loophole. But I think it's still essential.)
 
  Sigh, yes I suppose that's true, but I've never encountered a case where I
  needed to call unmask more than once, let alone at different types, within
  the scope of a mask.  Anyone else?
 
  Cheers,
 Simon

Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-19 Thread Simon Marlow

On 10/04/2010 19:42, Iavor Diatchki wrote:

Hello,
It seems that rank-2 types are sufficient to make the more polymorphic types:


{-# LANGUAGE Rank2Types #-}
import Control.Exception

data Mask = Mask (forall a. IO a -> IO a)

mask :: (Mask -> IO a) -> IO a
mask io = do
  b <- blocked
  if b
    then io (Mask id)
    else block $ io (Mask unblock)

restore :: Mask -> IO a -> IO a
restore (Mask f) a = f a
--


If you're going to do that, you could even get rid of the Rank 2 type 
completely:


data Mask = RestoreUnmask | RestoreMaskInterruptible | ..

restore RestoreUnmask a = unblock a
restore RestoreMaskInterruptible a = block a
...

at the expense of a little run-time tag testing.  But that's up to the 
implementation of course; the Mask type can be abstract.
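
Filling in the sketch above (my elaboration, written against the
block/unblock/blocked primitives available at the time; a third constructor
would be added alongside the nonInterruptibleMask proposal):

import Control.Exception (block, unblock, blocked)

data Mask = RestoreUnmask | RestoreMaskInterruptible

-- 'restore' dispatches on the saved state instead of closing over a function.
restore :: Mask -> IO a -> IO a
restore RestoreUnmask            a = unblock a
restore RestoreMaskInterruptible a = block a

-- 'mask' records the masking state that was in effect when it was entered.
mask :: (Mask -> IO a) -> IO a
mask io = do
  b <- blocked
  if b
    then io RestoreMaskInterruptible
    else block $ io RestoreUnmask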


So I think I like this variant, even though it adds a little API 
overhead.  Anyone else have any thoughts on this?


Cheers,
Simon



This is useful in an example like this:

forkThen :: IO () -> IO a -> IO a
forkThen io k = mask $ \m ->
   do tid <- forkIO (restore m io)
      restore m k `catch` \e ->
        do when (e == ThreadKilled) (killThread tid)
           throwIO e

-Iavor


On Thu, Apr 8, 2010 at 1:23 AM, Simon Marlowmarlo...@gmail.com  wrote:

On 07/04/2010 18:54, Isaac Dupree wrote:


On 04/07/10 11:12, Simon Marlow wrote:


It's possible to mis-use the API, e.g.

getUnmask = mask return


...incidentally,
unmask a = mask (\restore -> return restore) >>= (\restore -> restore a)


That doesn't work, as in it can't be used to unmask exceptions when they are
masked.  The 'restore' you get just restores the state to its current, i.e.
masked, state.


mask :: ((IO a -> IO a) -> IO b) -> IO b


It needs to be :: ((forall a. IO a -> IO a) -> IO b) -> IO b
so that you can use 'restore' on two different pieces of IO if you need
to. (alas, this requires not just Rank2Types but RankNTypes. Also, it
doesn't cure the loophole. But I think it's still essential.)


Sigh, yes I suppose that's true, but I've never encountered a case where I
needed to call unmask more than once, let alone at different types, within
the scope of a mask.  Anyone else?

Cheers,
Simon





Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-19 Thread Bas van Dijk
On Mon, Apr 19, 2010 at 5:54 PM, Simon Marlow marlo...@gmail.com wrote:
 So I think I like this variant, even though it adds a little API overhead.
  Anyone else have any thoughts on this?

I do think the RankNTypes version:
mask :: ((forall b. IO b -> IO b) -> IO a) -> IO a
is easier to use and explain because it doesn't require the extra
'restore' function.
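
For example (my own illustration; this is the shape that eventually went into
Control.Exception, so it compiles against later versions of base), the
function handed to the callback can be applied directly, even at more than
one result type:

import Control.Exception (mask)

example :: IO Int
example = mask $ \unmask -> do
  n <- unmask (return (42 :: Int))  -- unmask used at IO Int
  unmask (print n)                  -- unmask used at IO ()
  return n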

What are the problems with RankNTypes?

I can imagine that one problem is not being haskell98.
However the Control.Exception module is also not haskell98 due to the
existentially quantified SomeException constructor:
data SomeException = forall e . Exception e => SomeException e

regards,

Bas


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-10 Thread Iavor Diatchki
Hello,
It seems that rank-2 types are sufficient to make the more polymorphic types:


{-# LANGUAGE Rank2Types #-}
import Control.Exception

data Mask = Mask (forall a. IO a -> IO a)

mask :: (Mask -> IO a) -> IO a
mask io = do
  b <- blocked
  if b
    then io (Mask id)
    else block $ io (Mask unblock)

restore :: Mask -> IO a -> IO a
restore (Mask f) a = f a
--

This is useful in an example like this:

forkThen :: IO () -> IO a -> IO a
forkThen io k = mask $ \m ->
  do tid <- forkIO (restore m io)
     restore m k `catch` \e ->
       do when (e == ThreadKilled) (killThread tid)
          throwIO e

-Iavor


On Thu, Apr 8, 2010 at 1:23 AM, Simon Marlow marlo...@gmail.com wrote:
 On 07/04/2010 18:54, Isaac Dupree wrote:

 On 04/07/10 11:12, Simon Marlow wrote:

 It's possible to mis-use the API, e.g.

 getUnmask = mask return

 ...incidentally,
 unmask a = mask (\restore -> return restore) >>= (\restore -> restore a)

 That doesn't work, as in it can't be used to unmask exceptions when they are
 masked.  The 'restore' you get just restores the state to its current, i.e.
 masked, state.

 mask :: ((IO a -> IO a) -> IO b) -> IO b

 It needs to be :: ((forall a. IO a -> IO a) -> IO b) -> IO b
 so that you can use 'restore' on two different pieces of IO if you need
 to. (alas, this requires not just Rank2Types but RankNTypes. Also, it
 doesn't cure the loophole. But I think it's still essential.)

 Sigh, yes I suppose that's true, but I've never encountered a case where I
 needed to call unmask more than once, let alone at different types, within
 the scope of a mask.  Anyone else?

 Cheers,
        Simon



Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-10 Thread Iavor Diatchki
Hello,
I wonder if it might be possible to use just one primitive which atomically
changes the interrupt mask for a thread?  Here is an example of what I'm
thinking:

data MaskingState   = Unmasked
                    | MaskedInterruptible
                    | MaskedNonInterruptible

-- Atomically changes the interrupt mask for a thread, and returns the
-- old mask.
setMask :: MaskingState -> IO MaskingState
setMask = error "primitive?"

-- Change the mask for the duration of an IO action.
-- The action is passed the old mask.
scopedSetMask :: MaskingState -> (MaskingState -> IO a) -> IO a
scopedSetMask m io = do m1 <- setMask m
                        a  <- io m1
                        setMask m1
                        return a

-- Change the mask for the duration of an IO action.
scopedSetMask_ :: MaskingState -> IO a -> IO a
scopedSetMask_ m io = scopedSetMask m $ \_ ->
                        io

-- Simon's mask:
mask :: ((IO a -> IO a) -> IO b) -> IO b
mask f = scopedSetMask MaskedInterruptible $ \m ->
           f (scopedSetMask_ m)


-Iavor


On Sat, Apr 10, 2010 at 11:42 AM, Iavor Diatchki iavor.diatc...@gmail.com
wrote:
 Hello,
 It seems that rank-2 types are sufficient to make the more polymorphic
types:

 
 {-# LANGUAGE Rank2Types #-}
 import Control.Exception

 data Mask = Mask (forall a. IO a -> IO a)

 mask :: (Mask -> IO a) -> IO a
 mask io = do
   b <- blocked
   if b
     then io (Mask id)
     else block $ io (Mask unblock)

 restore :: Mask -> IO a -> IO a
 restore (Mask f) a = f a
 --

 This is useful in an example like this:

 forkThen :: IO () -> IO a -> IO a
 forkThen io k = mask $ \m ->
   do tid <- forkIO (restore m io)
      restore m k `catch` \e ->
        do when (e == ThreadKilled) (killThread tid)
           throwIO e

 -Iavor


 On Thu, Apr 8, 2010 at 1:23 AM, Simon Marlow marlo...@gmail.com wrote:
 On 07/04/2010 18:54, Isaac Dupree wrote:

 On 04/07/10 11:12, Simon Marlow wrote:

 It's possible to mis-use the API, e.g.

 getUnmask = mask return

 ...incidentally,
 unmask a = mask (\restore -> return restore) >>= (\restore -> restore a)

 That doesn't work, as in it can't be used to unmask exceptions when they
 are masked.  The 'restore' you get just restores the state to its current,
 i.e. masked, state.

 mask :: ((IO a -> IO a) -> IO b) -> IO b

 It needs to be :: ((forall a. IO a -> IO a) -> IO b) -> IO b
 so that you can use 'restore' on two different pieces of IO if you need
 to. (alas, this requires not just Rank2Types but RankNTypes. Also, it
 doesn't cure the loophole. But I think it's still essential.)

 Sigh, yes I suppose that's true, but I've never encountered a case where I
 needed to call unmask more than once, let alone at different types, within
 the scope of a mask.  Anyone else?

 Cheers,
Simon




Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-09 Thread Bertram Felgenhauer
Simon Marlow wrote:
 but they are needlessly complicated, in my opinion.  This offers the
 same functionality:
 
 mask :: ((IO a -> IO a) -> IO b) -> IO b
 mask io = do
   b <- blocked
   if b
      then io id
      else block $ io unblock

How does forkIO fit into the picture? That's one point where reasonable
code may want to unblock all exceptions unconditionally - for example to
allow the thread to be killed later.

timeout t io = block $ do
    result <- newEmptyMVar
    tid <- forkIO $ unblock (io >>= putMVar result)
    threadDelay t `onException` killThread tid
    killThread tid
    tryTakeMVar result

regards,

Bertram


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-09 Thread Bas van Dijk
On Fri, Apr 9, 2010 at 3:22 AM, Isaac Dupree
m...@isaac.cedarswampstudios.org wrote:
 OK, thanks for the link!  In fact, [tell me if my reasoning is wrong...], in
 that fork-definition, the 'putMVar' will never block, because there is only
 one putMVar for each created MVar.

Yes that's correct.

 I seem to remember that any execution of
 putMVar that does not *actually* block is guaranteed not be interrupted by
 asynchronous exceptions (if within a Control.Exception.block) -- which would
 be sufficient.  Is my memory right or wrong?

The following documentation seems to suggest that any function which
_may_ itself block is defined as interruptible:

http://haskell.org/ghc/docs/latest/html/libraries/base-4.2.0.0/Control-Exception.html#13

That doesn't answer your question precisely however.

If it is the case that operations are only interruptible when they
actually block then I don't need a nonInterruptibleMask in this last
example. However I still need one in my first example because the
takeMVar in decrement may absolutely block.

regards,

Bas


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-09 Thread Simon Marlow

On 09/04/2010 09:40, Bertram Felgenhauer wrote:

Simon Marlow wrote:

but they are needlessly complicated, in my opinion.  This offers the
same functionality:

mask :: ((IO a -> IO a) -> IO b) -> IO b
mask io = do
   b <- blocked
   if b
      then io id
      else block $ io unblock


How does forkIO fit into the picture? That's one point where reasonable
code may want to unblock all exceptions unconditionally - for example to
allow the thread to be killed later.


Sure, and it works exactly as before in that the new thread inherits the 
masking state of its parent thread.  To unmask exceptions in the child 
thread you need to use the restore operator passed to the argument of mask.


This does mean that if you fork a thread inside mask and don't pass it 
the restore operation, then it has no way to ever unmask exceptions.  At 
worst, this means you have to pass a restore value around where you 
didn't previously.



 timeout t io = block $ do
     result <- newEmptyMVar
     tid <- forkIO $ unblock (io >>= putMVar result)
     threadDelay t `onException` killThread tid
     killThread tid
     tryTakeMVar result


This would be written

  timeout t io = mask $ \restore -> do
      result <- newEmptyMVar
      tid <- forkIO $ restore (io >>= putMVar result)
      threadDelay t `onException` killThread tid
      killThread tid
      tryTakeMVar result

though the version of timeout in System.Timeout is better for various 
reasons.


Cheers,
Simon


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-09 Thread Simon Marlow

On 08/04/2010 21:20, Tyson Whitehead wrote:

On March 26, 2010 15:51:42 Isaac Dupree wrote:

On 03/25/10 12:36, Simon Marlow wrote:

I'd also be amenable to having block/unblock count nesting levels
instead, I don't think it would be too hard to implement and it wouldn't
require any changes at the library level.


Wasn't there a reason that it didn't nest?

I think it was that operations that block-as-in-takeMVar, for an
unbounded length of time, are always supposed to C.Exception.unblock and
in fact be unblocked within that operation.  Otherwise the thread might
never receive its asynchronous exceptions.


If I'm understanding correctly here, it would be nice if you could just go

unmask :: IO a -> IO a

and always have asynchronous exceptions on for the duration of IO a.



One solution might be if unmask always enabled asynchronous exception, but, in
a masked context, stopped them from propagating beyond the boundary of the IO
a action by re-queueing them be re-raised when next allowed.

Of course you've then got the problem of unmask having to produce a valid value
even when IO a was aborted, so you would have to go with something like

unmask :: a -> IO a -> IO a

where the first a gets returned if the IO a computation gets aborted by an
exception.  The original problem code would then go from


I haven't seen anyone else asking for this kind of design, and it's 
quite different to both what we have now and the new proposal.  What 
advantages would this have, do you think?


Cheers,
Simon


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-09 Thread Bas van Dijk
On Fri, Apr 9, 2010 at 10:40 AM, Bertram Felgenhauer
bertram.felgenha...@googlemail.com wrote:
 How does forkIO fit into the picture? That's one point where reasonable
 code may want to unblock all exceptions unconditionally - for example to
 allow the thread to be killed later.

    timeout t io = block $ do
        result <- newEmptyMVar
        tid <- forkIO $ unblock (io >>= putMVar result)
        threadDelay t `onException` killThread tid
        killThread tid
        tryTakeMVar result

The System.Timeout.timeout function is indeed problematic:

http://haskell.org/ghc/docs/latest/html/libraries/base-4.2.0.0/System-Timeout.html

To quote the documentation:

...The design of this combinator was guided by the objective that
timeout n f  should behave exactly the same as f as long as f doesn't
time out...

and

...It is also possible for f to receive exceptions thrown to it by
another thread...

They seem to contradict each other because when 'f' has asynchronous
exceptions blocked 'timeout n f' should also have asynchronous
exceptions blocked because it should behave the same, however the
latter says that 'f' may always receive asynchronous exceptions.

Of course for the timeout function to work correctly 'f' should be
able to receive asynchronous exceptions otherwise it won't terminate
when the Timeout exception is asynchronously thrown to it:

timeout :: Int -> IO a -> IO (Maybe a)
timeout n f
    | n <  0    = fmap Just f
    | n == 0    = return Nothing
    | otherwise = do
        pid <- myThreadId
        ex  <- fmap Timeout newUnique
        handleJust (\e -> if e == ex then Just () else Nothing)
                   (\_ -> return Nothing)
                   (bracket (forkIO (threadDelay n >> throwTo pid ex))
                            (killThread)
                            (\_ -> fmap Just f))

Now when we rewrite 'bracket' using 'mask', so that it's no longer an
asynchronous exception wormhole, and we apply timeout to a computation
in a thread that has asynchronous exceptions blocked, the computation
won't actually time out, because it won't be able to receive the
Timeout exception.
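
For reference, the mask-based bracket being referred to would look roughly
like this (a sketch, not necessarily the exact code that shipped):

import Control.Exception (mask, onException)

-- Acquire and release run masked; the body runs with the caller's masking
-- state restored, so bracket no longer unconditionally unmasks (no wormhole).
bracket :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c
bracket before after thing =
  mask $ \restore -> do
    a <- before
    r <- restore (thing a) `onException` after a
    _ <- after a
    return r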

I think we just have to live with this and explain it clearly in the
documentation of timeout that you should not call it in a masked
thread.

regards,

Bas


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-09 Thread Bas van Dijk
On Wed, Apr 7, 2010 at 5:12 PM, Simon Marlow marlo...@gmail.com wrote:
 Comments?  I have a working implementation, just cleaning it up to make a
 patch.

Can you also take a look at these bugs I reported earlier:

http://hackage.haskell.org/trac/ghc/ticket/3944
http://hackage.haskell.org/trac/ghc/ticket/3945

These can also be solved with 'mask'.

regards,

Bas


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-09 Thread Bertram Felgenhauer
Simon Marlow wrote:
 On 09/04/2010 09:40, Bertram Felgenhauer wrote:
 Simon Marlow wrote:
 mask :: ((IO a -> IO a) -> IO b) -> IO b
 
 How does forkIO fit into the picture? That's one point where reasonable
 code may want to unblock all exceptions unconditionally - for example to
 allow the thread to be killed later.
 
 Sure, and it works exactly as before in that the new thread inherits
 the masking state of its parent thread.  To unmask exceptions in the
 child thread you need to use the restore operator passed to the
 argument of mask.
 
 This does mean that if you fork a thread inside mask and don't pass
 it the restore operation, then it has no way to ever unmask
 exceptions.  At worst, this means you have to pass a restore value
 around where you didn't previously.
 
  timeout t io = block $ do
      result <- newEmptyMVar
      tid <- forkIO $ unblock (io >>= putMVar result)
      threadDelay t `onException` killThread tid
      killThread tid
      tryTakeMVar result
 
 This would be written
 
   timeout t io = mask $ \restore -> do
       result <- newEmptyMVar
       tid <- forkIO $ restore (io >>= putMVar result)
       threadDelay t `onException` killThread tid
       killThread tid
       tryTakeMVar result

I'm worried about the case when this function is called with exceptions
already blocked. Then 'restore' will be the identity, and exceptions
will continue to be blocked inside the forked thread.

You could argue that this is the responsibility of the whole chain of
callers (who'd have to supply their own 'restore' functions that will
have to be incorporated into the 'io' action), but that goes against
modularity. In my opinion there's a valid demand for an escape hatch
out of the blocked exception state for newly forked threads.

It could be baked into a variant of the forkIO primitive, say

forkIOwithUnblock :: ((IO a -> IO a) -> IO b) -> IO ThreadId

Kind regards,

Bertram


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-09 Thread Simon Marlow

On 08/04/2010 06:27, Dean Herington wrote:

Is there any reason not to use the more standard uninterruptible
instead of noninterruptible?


Good point, I'll change that.

Cheers,
Simon


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-09 Thread Simon Marlow

On 09/04/2010 10:33, Bas van Dijk wrote:

On Fri, Apr 9, 2010 at 3:22 AM, Isaac Dupree
m...@isaac.cedarswampstudios.org  wrote:

OK, thanks for the link!  In fact, [tell me if my reasoning is wrong...], in
that fork-definition, the 'putMVar' will never block, because there is only
one putMVar for each created MVar.


Yes that's correct.


I seem to remember that any execution of
putMVar that does not *actually* block is guaranteed not to be interrupted by
asynchronous exceptions (if within a Control.Exception.block) -- which would
be sufficient.  Is my memory right or wrong?


The following documentation seems to suggest that any function which
_may_ itself block is defined as interruptible:

http://haskell.org/ghc/docs/latest/html/libraries/base-4.2.0.0/Control-Exception.html#13

That doesn't answer your question precisely however.


The semantics in our original paper[1] does indeed behave as Isaac 
described: only if an operation really blocks is it interruptible.


However, we've already changed this for throwTo, and potentially we 
might want to change it for other operations too.  It's tricky to keep 
that behaviour in a multithreaded runtime, because it introduces extra 
complexity to distinguish between waiting for a response to a message 
from another CPU and actually blocking waiting for the other CPU to do 
something.  A concrete example of this is throwTo, which technically 
should only block if the target thread is inside 'mask', but in practice 
blocks if the target thread is running on another CPU and the current 
CPU has just sent a message to the other CPU to request a throwTo. 
(this is only in GHC 6.14 where we've changed the throwTo protocol to be 
message-based rather than the previous complicated arrangement of locks).


Still, I think I'd advocate using STM and/or atomicModifyIORef in cases 
like this, where it is much easier to write code that is guaranteed not 
to block and hence not be interruptible.
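
As a concrete illustration of that advice (my own sketch, not code from the
threads package): a thread-group counter kept in an IORef can be decremented
with atomicModifyIORef, which never blocks and so is never interruptible;
only the thread that drops the count to zero touches an MVar, and that single
putMVar on an empty MVar cannot block either:

import Control.Concurrent.MVar (MVar, putMVar)
import Control.Monad (when)
import Data.IORef (IORef, atomicModifyIORef)

decrement :: IORef Int -> MVar () -> IO ()
decrement counter allDone = do
  remaining <- atomicModifyIORef counter (\n -> (n - 1, n - 1))
  when (remaining == 0) $ putMVar allDone ()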


Cheers,
Simon

[1] http://www.haskell.org/~simonmar/papers/async.pdf


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-08 Thread Simon Marlow

On 07/04/2010 18:54, Isaac Dupree wrote:

On 04/07/10 11:12, Simon Marlow wrote:

It's possible to mis-use the API, e.g.

getUnmask = mask return


...incidentally,
unmask a = mask (\restore - return restore) = (\restore - restore a)


That doesn't work, as in it can't be used to unmask exceptions when they 
are masked.  The 'restore' you get just restores the state to its 
current, i.e. masked, state.



mask :: ((IO a -> IO a) -> IO b) -> IO b


It needs to be :: ((forall a. IO a -> IO a) -> IO b) -> IO b
so that you can use 'restore' on two different pieces of IO if you need
to. (alas, this requires not just Rank2Types but RankNTypes. Also, it
doesn't cure the loophole. But I think it's still essential.)


Sigh, yes I suppose that's true, but I've never encountered a case where 
I needed to call unmask more than once, let alone at different types, 
within the scope of a mask.  Anyone else?


Cheers,
Simon


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-08 Thread Isaac Dupree

On 04/08/10 04:23, Simon Marlow wrote:

On 07/04/2010 18:54, Isaac Dupree wrote:

On 04/07/10 11:12, Simon Marlow wrote:

It's possible to mis-use the API, e.g.

getUnmask = mask return


...incidentally,
unmask a = mask (\restore -> return restore) >>= (\restore -> restore a)


That doesn't work, as in it can't be used to unmask exceptions when they
are masked. The 'restore' you get just restores the state to its
current, i.e. masked, state.


oh good point. Then you're right, anyone who was trying to work around 
it would be doing something obviously wrong.



mask :: ((IO a -> IO a) -> IO b) -> IO b


It needs to be :: ((forall a. IO a -> IO a) -> IO b) -> IO b
so that you can use 'restore' on two different pieces of IO if you need
to. (alas, this requires not just Rank2Types but RankNTypes. Also, it
doesn't cure the loophole. But I think it's still essential.)


Sigh, yes I suppose that's true, but I've never encountered a case where
I needed to call unmask more than once, let alone at different types,
within the scope of a mask. Anyone else?


FWIW, the function I made up to test my theory was as below; I haven't 
thought of any actual uses yet:


finally2 :: IO a1 -> IO a2 -> IO b -> IO (a1, a2)
finally2 a1 a2 b =
  mask $ \restore -> do
    r <- (liftM2 (,) (restore a1) (restore a2)) `onException` b
    b
    return r

-Isaac


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-08 Thread Isaac Dupree

On 04/07/10 17:50, Simon Marlow wrote:

On 07/04/10 21:23, Bas van Dijk wrote:

On Wed, Apr 7, 2010 at 5:12 PM, Simon Marlowmarlo...@gmail.com wrote:

Comments?


I really like this design.

One question, are you planning to write the MVar utility functions
using 'mask' or using 'nonInterruptibleMask'? As in:


withMVar :: MVar a -> (a -> IO b) -> IO b
withMVar m f = whichMask? $ \restore -> do
    a <- takeMVar m
    b <- restore (f a) `onException` putMVar m a
    putMVar m a
    return b


Definitely the ordinary interruptible mask. It is the intention that the
new nonInterruptibleMask is only used in exceptional circumstances where
dealing with asynchronous exceptions emerging from blocking operations
would be impossible to deal with. The more unwieldy name was chosen
deliberately for this reason.

The danger with nonInterruptibleMask is that it is all too easy to write
a program that will be unresponsive to Ctrl-C, for example. It should be
used with great care - for example when there is reason to believe that
any blocking operations that would otherwise be interruptible will only
block for short bounded periods.


it could be called unsafeNonInterruptibleMask 
(unsafeUninterruptibleMask?)... after all, 'mask' is uninterruptible for 
most/many operations, that's its point, but if we put 'mask' and 
'nonInterruptibleMask' next to each other, I think people are likely to 
be confused (..less so if there's good Haddock documentation. But i'm 
fearing the 'forkOS' debacle where people still wrongly recommend that 
because the name sounds good...)


I still would like to see examples of where it's needed, because I 
slightly suspect that wrapping possibly-blocking operations in an 
exception handler that does something appropriate, along with ordinary 
'mask', might be sufficient... But I expect to be proved wrong; I just 
haven't figured out how to prove myself wrong.


-Isaac


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-08 Thread Bas van Dijk
On Thu, Apr 8, 2010 at 9:15 PM, Isaac Dupree
m...@isaac.cedarswampstudios.org wrote:
 I still would like to see examples of where it's needed, because I slightly
 suspect that wrapping possibly-blocking operations in an exception handler
 that does something appropriate, along with ordinary 'mask', might be
 sufficient... But I expect to be proved wrong; I just haven't figured out
 how to prove myself wrong.

Take my threads package, I uploaded to hackage yesterday, as an example.

In Control.Concurrent.Thread.Group.fork I first increment numThreads
before forking. When the fork thread terminates it should decrement
numThreads and release the lock when it reaches zero so that potential
waiters are woken up:

http://hackage.haskell.org/packages/archive/threads/0.1/doc/html/src/Control-Concurrent-Thread-Group.html#fork

Now if an asynchronous exception is thrown during the takeMVar in
decrement, there's no way I can ensure the lock gets released, so the
waiters will deadlock.

This could be solved by wrapping decrement in nonInterruptibleMask.
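
Roughly like this (my own sketch, not the actual threads-package code;
'increment' and 'decrement' stand for the group-counter bookkeeping, and the
proposed combinator eventually landed in Control.Exception as
uninterruptibleMask_):

import Control.Concurrent (ThreadId, forkIO)
import Control.Exception (finally, uninterruptibleMask_)

-- 'decrement' may block in takeMVar, so it runs under an uninterruptible
-- mask: the cleanup can no longer be broken off by an asynchronous
-- exception, and the lock is always released.
forkInGroup :: IO () -> IO () -> IO () -> IO ThreadId
forkInGroup increment decrement action = do
  increment
  forkIO $ action `finally` uninterruptibleMask_ decrement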

regards,

Bas


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-08 Thread Bas van Dijk
On Thu, Apr 8, 2010 at 11:45 PM, Bas van Dijk v.dijk@gmail.com wrote:
 On Thu, Apr 8, 2010 at 9:15 PM, Isaac Dupree
 m...@isaac.cedarswampstudios.org wrote:
 I still would like to see examples of where it's needed, because I slightly
 suspect that wrapping possibly-blocking operations in an exception handler
 that does something appropriate, along with ordinary 'mask', might be
 sufficient... But I expect to be proved wrong; I just haven't figured out
 how to prove myself wrong.

 Take my threads package, I uploaded to hackage yesterday, as an example.

 In Control.Concurrent.Thread.Group.fork...

Control.Concurrent.Thread.fork is a similar and simpler example of why
nonInterruptibleMask is needed:

http://hackage.haskell.org/packages/archive/threads/0.1/doc/html/src/Control-Concurrent-Thread.html#fork

If an asynchronous exception is thrown during the 'putMVar res' any
waiters on the thread will never be woken up.
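
The pattern in question is roughly the following (a paraphrase for
illustration, not the actual code behind the link):

import Control.Concurrent (ThreadId, forkIO)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar)
import Control.Exception (SomeException, try)

-- The forked computation's outcome is written into an MVar that waiters
-- read.  If an asynchronous exception strikes around the final putMVar,
-- the MVar stays empty and the waiters block forever, hence the wish for
-- an uninterruptible mask around that last step.
forkWithResult :: IO a -> IO (ThreadId, MVar (Either SomeException a))
forkWithResult action = do
  res <- newEmptyMVar
  tid <- forkIO $ try action >>= putMVar res
  return (tid, res)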

regards,

Bas


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-08 Thread Isaac Dupree

On 04/08/10 19:56, Bas van Dijk wrote:

Control.Concurrent.Thread.fork is a similar and simpler example of why
nonInterruptibleMask is needed:

http://hackage.haskell.org/packages/archive/threads/0.1/doc/html/src/Control-Concurrent-Thread.html#fork

If an asynchronous exception is thrown during the 'putMVar res' any
waiters on the thread will never be woken up.


OK, thanks for the link!  In fact, [tell me if my reasoning is 
wrong...], in that fork-definition, the 'putMVar' will never block, 
because there is only one putMVar for each created MVar.  I seem to 
remember that any execution of putMVar that does not *actually* block is 
guaranteed not to be interrupted by asynchronous exceptions (if within a 
Control.Exception.block) -- which would be sufficient.  Is my memory 
right or wrong?


-Isaac


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-07 Thread Simon Marlow

On 25/03/2010 23:16, Bas van Dijk wrote:

On Thu, Mar 25, 2010 at 11:23 PM, Simon Marlowmarlo...@gmail.com  wrote:

So I'm all for deprecating 'block' in favor of 'mask'. However what do
we call 'unblock'? 'unmask' maybe? However when we have:

mask $ mask $ unmask x

and these operations have the counting nesting levels semantics,
asynchronous exception will not be unmasked in 'x'. However I don't
currently know of a nicer alternative.


But that's the semantics you wanted, isn't it?  Am I missing something?


Yes I like the nesting semantics that Twan proposed.

But with regard to naming, I think the name 'unmask' is a bit
misleading because it doesn't unmask asynchronous exceptions. What it
does is remove a layer of masking so to speak. I think the names of
the functions should reflect the nesting or stacking behavior. Maybe
something like:

addMaskingLayer :: IO a -> IO a
removeMaskingLayer :: IO a -> IO a
nrOfMaskingLayers :: IO Int

However I do find those a bit long and ugly...


I've been thinking some more about this, and I have a new proposal.

I came to the conclusion that counting nesting layers doesn't solve the 
problem: the wormhole still exists in the form of nested unmasks.  That 
is, a library function could always escape out of a masked context by 
writing


  unmask $ unmask $ unmask $ ...

enough times.

The functions blockedApply and blockedApply2 proposed by Bas van Dijk 
earlier solve this problem:


blockedApply :: IO a -> (IO a -> IO b) -> IO b
blockedApply a f = do
  b <- blocked
  if b
    then f a
    else block $ f $ unblock a

blockedApply2 :: (c -> IO a) -> ((c -> IO a) -> IO b) -> IO b
blockedApply2 g f = do
  b <- blocked
  if b
    then f g
    else block $ f $ unblock . g

but they are needlessly complicated, in my opinion.  This offers the 
same functionality:


mask :: ((IO a -> IO a) -> IO b) -> IO b
mask io = do
  b <- blocked
  if b
     then io id
     else block $ io unblock

to be used like this:

a `finally` b =
  mask $ \restore -> do
    r <- restore a `onException` b
    b
    return r

So the property we want is that if I call a library function

  mask $ \_ -> call_library_function

then there's no way that the library function can unmask exceptions.  If 
all they have access to is 'mask', then that's true.


It's possible to mis-use the API, e.g.

  getUnmask = mask return

but this is also possible using blockedApply, it's just a bit harder:

  getUnmask = do
    m <- newEmptyMVar
    f <- blockedApply (join $ takeMVar m) return
    return (\io -> putMVar m io >> f)

To prevent this kind of shenanigans we would need a parametricity trick 
like the ST monad.  I don't think it's a big problem that you can do 
this, as long as (a) we can explain why it's a bad idea in the docs, and 
(b) we can still give a semantics to it, which we can.


So in summary, my proposal for the API is:

  mask  :: ((IO a -> IO a) -> IO b) -> IO b
  -- as above

  mask_ :: IO a -> IO a
  mask_ io = mask $ \_ -> io

and additionally:

  nonInterruptibleMask  :: ((IO a -> IO a) -> IO b) -> IO b
  nonInterruptibleMask_ :: IO a -> IO a

which is just like mask/mask_, except that blocking operations (e.g. 
takeMVar) are not interruptible.  Nesting mask inside 
nonInterruptibleMask has no effect.  The new version of 'blocked' would be:


   data MaskingState = Unmasked
                     | MaskedInterruptible
                     | MaskedNonInterruptible

  getMaskingState :: IO MaskingState
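
As a small usage illustration of getMaskingState (my own example, not part of
the proposal; the names below are the ones that later landed in
Control.Exception): a guard that refuses to run an interruptibility-sensitive
action, such as System.Timeout.timeout, from a context where asynchronous
exceptions are already masked:

import Control.Exception (MaskingState(..), getMaskingState)

requireUnmasked :: String -> IO a -> IO a
requireUnmasked name act = do
  st <- getMaskingState
  case st of
    Unmasked -> act
    _        -> error (name ++ ": asynchronous exceptions are masked")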

Comments?  I have a working implementation, just cleaning it up to make 
a patch.


Cheers,
Simon



RE: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-07 Thread Sittampalam, Ganesh
Simon Marlow wrote:

 I came to the conclusion that counting nesting layers doesn't solve
 the problem: the wormhole still exists in the form of nested unmasks.
 That is, a library function could always escape out of a masked
 context by writing
 
unmask $ unmask $ unmask $ ...
 
 enough times.
[...]
 mask :: ((IO a -> IO a) -> IO b) -> IO b
 mask io = do
    b <- blocked
    if b
       then io id
       else block $ io unblock
 
 to be used like this:
 
 a `finally` b =
    mask $ \restore -> do
      r <- restore a `onException` b
      b
      return r
 
 So the property we want is that if I call a library function
 
   mask $ \_ -> call_library_function
 
 then there's no way that the library function can unmask exceptions. 
 If all they have access to is 'mask', then that's true.
[...]
 It's possible to mis-use the API, e.g.
 
getUnmask = mask return

Given that both the simple mask/unmask and your alternate proposal
have backdoors, is the extra complexity really worth it?

The problem with the existing API is that it's not possible even for
well-behaved library code to use block/unblock without screwing up
callers. With the simple mask/unmask, the rule is simply that you don't
call unmask except within the context of your own mask calls.

Ganesh


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-07 Thread Simon Marlow

On 07/04/2010 16:20, Sittampalam, Ganesh wrote:

Simon Marlow wrote:


I came to the conclusion that counting nesting layers doesn't solve
the problem: the wormhole still exists in the form of nested unmasks.
That is, a library function could always escape out of a masked
context by writing

unmask $ unmask $ unmask $ ...

enough times.

[...]

mask :: ((IO a -> IO a) -> IO b) -> IO b
mask io = do
   b <- blocked
   if b
      then io id
      else block $ io unblock

to be used like this:

a `finally` b =
   mask $ \restore -> do
     r <- restore a `onException` b
     b
     return r

So the property we want is that if I call a library function

mask $ \_ -> call_library_function

then there's no way that the library function can unmask exceptions.
If all they have access to is 'mask', then that's true.

[...]

It's possible to mis-use the API, e.g.

getUnmask = mask return


Given that both the simple mask/unmask and your alternate proposal
have backdoors, is the extra complexity really worth it?


The answer is yes, for a couple of reasons.

 1. this version really is safer than mask/unmask that count
nesting levels.  If the caller is playing by the rules,
then a library function can't unmask exceptions.  The
responsibility not to screw up is in the hands of the
caller, not the callee: that's an improvement.

 2. in this version more of the code is in Haskell, and
the primitives and RTS implementation are simpler.  So
actually I consider this less complex than counting
nesting levels.

I did implement the nesting levels version first, and when adding 
non-interruptibility to the mix things got quite hairy.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-07 Thread Isaac Dupree

On 04/07/10 11:12, Simon Marlow wrote:

It's possible to mis-use the API, e.g.

getUnmask = mask return


...incidentally,
unmask a = mask (\restore -> return restore) >>= (\restore -> restore a)


mask :: ((IO a -> IO a) -> IO b) -> IO b


It needs to be :: ((forall a. IO a -> IO a) -> IO b) -> IO b
so that you can use 'restore' on two different pieces of IO if you need 
to. (alas, this requires not just Rank2Types but RankNTypes. Also, it 
doesn't cure the loophole. But I think it's still essential.)
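
To make that concrete, here is a sketch (my names, written against the 
current block/unblock/blocked) where the polymorphic restore is used at 
two different result types in one call, which the rank-1 signature would 
reject:

{-# LANGUAGE RankNTypes #-}

import Control.Concurrent.MVar (MVar, takeMVar)
import Control.Exception (block, unblock, blocked)

-- A rank-2 variant of the proposed 'mask' (sketch only).
maskR :: ((forall a. IO a -> IO a) -> IO b) -> IO b
maskR io = do
  b <- blocked
  if b
     then io id
     else block $ io unblock

-- 'restore' is used at IO Int and at IO String within one call.
demo :: MVar Int -> MVar String -> IO (Int, String)
demo mi ms =
  maskR $ \restore -> do
    i <- restore (takeMVar mi)
    s <- restore (takeMVar ms)
    return (i, s)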



nonInterruptibleMask :: ((IO a -> IO a) -> IO b) -> IO b
nonInterruptibleMask_ :: IO a -> IO a

which is just like mask/mask_, except that blocking operations (e.g.
takeMVar) are not interruptible.


What would be an appropriate use of this?

-Isaac
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-07 Thread Bas van Dijk
On Wed, Apr 7, 2010 at 5:12 PM, Simon Marlow marlo...@gmail.com wrote:
 Comments?

I really like this design.

One question, are you planning to write the MVar utility functions
using 'mask' or using 'nonInterruptibleMask'? As in:

 withMVar :: MVar a -> (a -> IO b) -> IO b
 withMVar m f = whichMask? $ \restore -> do
   a <- takeMVar m
   b <- restore (f a) `onException` putMVar m a
   putMVar m a
   return b

regards,

Bas
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-07 Thread Simon Marlow

On 07/04/10 21:23, Bas van Dijk wrote:

On Wed, Apr 7, 2010 at 5:12 PM, Simon Marlowmarlo...@gmail.com  wrote:

Comments?


I really like this design.

One question, are you planning to write the MVar utility functions
using 'mask' or using 'nonInterruptibleMask'? As in:


withMVar :: MVar a -> (a -> IO b) -> IO b
withMVar m f = whichMask? $ \restore -> do
   a <- takeMVar m
   b <- restore (f a) `onException` putMVar m a
   putMVar m a
   return b


Definitely the ordinary interruptible mask.  The intention is that the 
new nonInterruptibleMask is only used in exceptional circumstances, 
where asynchronous exceptions emerging from blocking operations would 
be impossible to deal with.  The more unwieldy name was chosen 
deliberately for this reason.


The danger with nonInterruptibleMask is that it is all too easy to write 
a program that will be unresponsive to Ctrl-C, for example.  It should 
be used with great care - for example when there is reason to believe 
that any blocking operations that would otherwise be interruptible will 
only block for short bounded periods.


In the case of withMVar, if the caller is concerned about the 
interruptibility then they can call it within nonInterruptibleMask, 
which overrides the interruptible mask in withMVar.
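
Concretely, the combination looks something like this (a sketch only, 
written against the mask/nonInterruptibleMask_ proposed earlier in this 
thread, with the usual MVar and onException imports; withMVar' stands in 
for the library's withMVar):

-- Sketch of withMVar under the ordinary, interruptible mask.
withMVar' :: MVar a -> (a -> IO b) -> IO b
withMVar' m f = mask $ \restore -> do
  a <- takeMVar m             -- takeMVar is interruptible while it blocks
  b <- restore (f a) `onException` putMVar m a
  putMVar m a
  return b

-- A caller that cannot tolerate interruption can override it from outside.
criticalSection :: MVar a -> (a -> IO b) -> IO b
criticalSection m f = nonInterruptibleMask_ (withMVar' m f)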


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-04-07 Thread Dean Herington
Is there any reason not to use the more standard 'uninterruptible' 
instead of 'noninterruptible'?


Dean

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-03-29 Thread Simon Marlow

On 26/03/2010 19:51, Isaac Dupree wrote:

On 03/25/10 12:36, Simon Marlow wrote:

I'd also be amenable to having block/unblock count nesting levels
instead, I don't think it would be too hard to implement and it wouldn't
require any changes at the library level.


Wasn't there a reason that it didn't nest?

I think it was that operations that block-as-in-takeMVar, for an
unbounded length of time, are always supposed to C.Exception.unblock and
in fact be unblocked within that operation. Otherwise the thread might
never receive its asynchronous exceptions.


That's why we have the notion of interruptible operations: any 
operation that blocks for an unbounded amount of time is treated as 
interruptible and can receive asynchronous exceptions.


I think of block as a way to turn asynchronous exceptions into 
synchronous ones.  So rather than having to worry that an asynchronous 
exception may strike at any point, you only have to worry about them 
being thrown by blocking operations.  If in doubt you should think of 
every library function as potentially interruptible, but that still 
means you usually have enough control over asynchronous exceptions to 
avoid problems.


If things get really hairy, consider using STM instead.  In STM an 
asynchronous exception causes a rollback, so maintaining your invariants 
is trivial - this is arguably one of the main benefits of STM.  There's 
no need for block/unblock within STM transactions.
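
For example (a minimal sketch, using the stm package): a transfer between 
two TVars is atomic with respect to asynchronous exceptions, so the 
invariant that the total is preserved cannot be broken part-way through:

import Control.Concurrent.STM

-- If an asynchronous exception arrives mid-transaction, the whole
-- transaction is rolled back; nobody ever observes a partial transfer.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to n = atomically $ do
  x <- readTVar from
  writeTVar from (x - n)
  y <- readTVar to
  writeTVar to (y + n)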


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-03-29 Thread Andy Stewart
Simon Marlow marlo...@gmail.com writes:

 On 26/03/2010 19:51, Isaac Dupree wrote:
 On 03/25/10 12:36, Simon Marlow wrote:
 I'd also be amenable to having block/unblock count nesting levels
 instead, I don't think it would be too hard to implement and it wouldn't
 require any changes at the library level.

 Wasn't there a reason that it didn't nest?

 I think it was that operations that block-as-in-takeMVar, for an
 unbounded length of time, are always supposed to C.Exception.unblock and
 in fact be unblocked within that operation. Otherwise the thread might
 never receive its asynchronous exceptions.

 That's why we have the notion of interruptible operations: any operation 
 that blocks for an
 unbounded amount of time is treated as interruptible and can receive 
 asynchronous exceptions.

 I think of block as a way to turn asynchronous exceptions into synchronous 
 ones.  So rather that
 having to worry that an asynchronous exception may strike at any point, you 
 only have to worry about
 them being throw by blocking operations.  If in doubt you should think of 
 every library function as
 potentially interruptible, but that still means you usually have enough 
 control over asynchronous
 exceptions to avoid problems.

 If things get really hairy, consider using STM instead.  In STM an 
 asynchronous exception causes a
 rollback, so maintaining your invariants is trivial - this is arguably one of 
 the main benefits of
 STM.  There's no need for block/unblock within STM transactions.
Hi Isaac,

Yep, like Simon said, use STM instead.

STM keeps a transaction log for every TVar. Normally each thread just
executes its STM code without caring about the state of other threads;
the log is checked when a TVar needs to be written. If there is no
conflict the value is written; if there is a conflict with another
thread, the current transaction is rolled back and re-executed.

STM's usage is simple enough, so I recommend using STM to fix your
problem.

Cheers,

  -- Andy

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-03-26 Thread Isaac Dupree

On 03/25/10 12:36, Simon Marlow wrote:

I'd also be amenable to having block/unblock count nesting levels
instead, I don't think it would be too hard to implement and it wouldn't
require any changes at the library level.


Wasn't there a reason that it didn't nest?

I think it was that operations that block-as-in-takeMVar, for an 
unbounded length of time, are always supposed to call C.Exception.unblock and 
in fact be unblocked within that operation.  Otherwise the thread might 
never receive its asynchronous exceptions.


Thus, if within a C.Exception.block you call an IO action that might block, 
such as takeMVar, modifyMVar_, or an arbitrary IO x argument to your 
function, then you must take care to handle its exceptions (for example, 
in the case of modifyMVar_'s implementation, to roll back the MVar to its 
previous value).


-Isaac
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-03-25 Thread Simon Marlow

On 25/03/2010 11:57, Bas van Dijk wrote:

Dear all, (sorry for this long mail)

When programming in the IO monad you have to be careful about
asynchronous exceptions. These nasty little worms can be thrown to you
at any point in your IO computation. You have to be extra careful when
doing, what must be, an atomic transaction like:

do old <- takeMVar m
   new <- f old `onException` putMVar m old
   putMVar m new

If an asynchronous exception is thrown to you right after you have
taken your MVar, the putMVar will never be executed and your MVar will
be left in the empty state. This can possibly lead to deadlock.

The standard solution for this is to use a function like modifyMVar_:

modifyMVar_ :: MVar a -> (a -> IO a) -> IO ()
modifyMVar_ m io =
   block $ do
     a  <- takeMVar m
     a' <- unblock (io a) `onException` putMVar m a
     putMVar m a'

As you can see this will first block asynchronous exceptions before
taking the MVar.

It is usually better to be in the blocked state for as short a time as
possible, to ensure that asynchronous exceptions can be handled as soon
as possible. This is why modifyMVar_ unblocks the inner (io a).

However, now comes the problem I would like to talk about. What if I
want to use modifyMVar_ as part of a bigger atomic transaction? As in:

block $ do ...
           modifyMVar_ m f
           ...


From a quick glance at this code it looks like asynchronous exceptions
can't be thrown to this transaction because we block them. However the
unblock in modifyMVar_ opens an asynchronous exception wormhole
right into our blocked computation. This destroys modularity.

Besides modifyMVar_ the following functions suffer the same problem:

* Control.Exception.finally/bracket/bracketOnError
* Control.Concurrent.MVar.withMVar/modifyMVar_/modifyMVar
* Foreign.Marshal.Pool.withPool

We can solve it by introducing two handy functions 'blockedApply' and
'blockedApply2' and wrapping each of the operations in them:


import Control.Exception
import Control.Concurrent.MVar
import Foreign.Marshal.Pool
import GHC.IO ( catchAny )




blockedApply :: IO a -> (IO a -> IO b) -> IO b
blockedApply a f = do
   b <- blocked
   if b
     then f a
     else block $ f $ unblock a

blockedApply2 :: (c -> IO a) -> ((c -> IO a) -> IO b) -> IO b
blockedApply2 g f = do
   b <- blocked
   if b
     then f g
     else block $ f $ unblock . g


Nice, I hadn't noticed that you can now code this up in the library 
since we added 'blocked'.  Unfortunately this isn't cheap: 'blocked' is 
currently an out-of-line call to the RTS, so if we want to start using 
it for important things like finally and bracket, then we should put 
some effort into optimising it.


I'd also be amenable to having block/unblock count nesting levels 
instead, I don't think it would be too hard to implement and it wouldn't 
require any changes at the library level.


Incidentally, I've been using the term "mask" rather than "block" in 
this context, as "block" is far too overloaded.  It would be nice to 
change the terminology in the library too, leaving the old functions 
around for backwards compatibility of course.


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-03-25 Thread Bas van Dijk
On Thu, Mar 25, 2010 at 5:36 PM, Simon Marlow marlo...@gmail.com wrote:
 Nice, I hadn't noticed that you can now code this up in the library since we
 added 'blocked'.  Unfortunately this isn't cheap: 'blocked' is currently an
 out-of-line call to the RTS, so if we want to start using it for important
 things like finally and bracket, then we should put some effort into
 optimising it.

 I'd also be amenable to having block/unblock count nesting levels instead, I
 don't think it would be too hard to implement and it wouldn't require any
 changes at the library level.


Yes counting the nesting level like Twan proposed will definitely
solve the modularity problem.

I do think we need to optimize the block and unblock operations in
such a way that they don't need to use IORefs to save the counting
level. The version Twan posted requires 2 reads and 2 writes for a
block and unblock. While I haven't profiled it I think it's not very
efficient.

 Incedentally, I've been using the term mask rather than block in this
 context, as block is far too overloaded.  It would be nice to change the
 terminology in the library too, leaving the old functions around for
 backwards compatibility of course.

Indeed, 'block' is too overloaded. I've been using block and unblock a
lot in concurrent-extra[1], and because this package deals with threads
that can block, it is sometimes confusing whether a 'block' refers to
thread blocking or to the blocking of asynchronous exceptions.

So I'm all for deprecating 'block' in favor of 'mask'. However, what do
we call 'unblock'? 'unmask' maybe? However, when we have:

mask $ mask $ unmask x

and these operations have the counting-nesting-levels semantics,
asynchronous exceptions will not be unmasked in 'x'. However I don't
currently know of a nicer alternative.
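
As a toy model of those semantics (just for illustration, nothing like the
real implementation): think of the masking state as a nesting depth that
'mask' increments and 'unmask' decrements, with asynchronous exceptions
deliverable only at depth 0:

-- Depth 0 means asynchronous exceptions may be delivered.
type Depth = Int

maskD, unmaskD :: Depth -> Depth
maskD   d = d + 1
unmaskD d = max 0 (d - 1)

-- The body of  mask $ mask $ unmask x  runs at depth 1,
-- so exceptions are still masked inside x:
depthInsideX :: Depth
depthInsideX = unmaskD (maskD (maskD 0))   -- == 1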

BTW I also use the 'blockedApply' function in the
Control.Concurrent.Thread.forkIO function of concurrent-extra:

http://code.haskell.org/concurrent-extra/Control/Concurrent/Thread.hs

forkIO ∷ IO α → IO (ThreadId α)
forkIO = fork Conc.forkIO

fork ∷ (IO () → IO Conc.ThreadId) → IO α → IO (ThreadId α)
fork doFork act = do
  stop ← newEmptyMVar
  fmap (ThreadId stop) $ blockedApply act $ \a → doFork $ try a >>= putMVar stop

Here I use it to ensure that the forked IO computation keeps the same
blocked status as the parent thread, so that this forkIO has the same
semantics as the standard forkIO.

regards,

Bas

[1] http://hackage.haskell.org/package/concurrent-extra
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-03-25 Thread Steve Schafer
On Thu, 25 Mar 2010 18:16:07 +0100, you wrote:

Yes counting the nesting level like Twan proposed will definitely
solve the modularity problem.

I do think we need to optimize the block and unblock operations in
such a way that they don't need to use IORefs to save the counting
level. The version Twan posted requires 2 reads and 2 writes for a
block and unblock. While I haven't profiled it I think it's not very
efficient.

Wouldn't you be better off using real transaction processing (i.e.,
with rollback)? That preserves the greatest possible modularity, because
the lower level operations don't have to worry about failures at all.
You generally only care about atomicity at some outer, observable
level; there is rarely any point in worrying about nested atomicity.

Steve Schafer
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-03-25 Thread Simon Marlow

On 25/03/10 17:16, Bas van Dijk wrote:

On Thu, Mar 25, 2010 at 5:36 PM, Simon Marlowmarlo...@gmail.com  wrote:

Nice, I hadn't noticed that you can now code this up in the library since we
added 'blocked'.  Unfortunately this isn't cheap: 'blocked' is currently an
out-of-line call to the RTS, so if we want to start using it for important
things like finally and bracket, then we should put some effort into
optimising it.

I'd also be amenable to having block/unblock count nesting levels instead, I
don't think it would be too hard to implement and it wouldn't require any
changes at the library level.



Yes counting the nesting level like Twan proposed will definitely
solve the modularity problem.

I do think we need to optimize the block and unblock operations in
such a way that they don't need to use IORefs to save the counting
level. The version Twan posted requires 2 reads and 2 writes for a
block and unblock. While I haven't profiled it I think it's not very
efficient.


Oh, I thought that was pseudocode to illustrate the idea.  Where would 
you store the IORef, for one thing?  No, I think the only sensible way 
is to build the nesting semantics into the primitives.



Incedentally, I've been using the term mask rather than block in this
context, as block is far too overloaded.  It would be nice to change the
terminology in the library too, leaving the old functions around for
backwards compatibility of course.


Indeed block is to overloaded. I've been using block and unblock a
lot in concurrent-extra[1] and because this package deals with threads
that can block it sometimes is confusing whether a block refers to
thread blocking or asynchronous exceptions blocking.

So I'm all for deprecating 'block' in favor of 'mask'. However what do
we call 'unblock'? 'unmask' maybe? However when we have:

mask $ mask $ unmask x

and these operations have the counting nesting levels semantics,
asynchronous exception will not be unmasked in 'x'. However I don't
currently know of a nicer alternative.


But that's the semantics you wanted, isn't it?  Am I missing something?

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-03-25 Thread Bas van Dijk
On Thu, Mar 25, 2010 at 11:23 PM, Simon Marlow marlo...@gmail.com wrote:
 So I'm all for deprecating 'block' in favor of 'mask'. However what do
 we call 'unblock'? 'unmask' maybe? However when we have:

 mask $ mask $ unmask x

 and these operations have the counting nesting levels semantics,
 asynchronous exception will not be unmasked in 'x'. However I don't
 currently know of a nicer alternative.

 But that's the semantics you wanted, isn't it?  Am I missing something?

Yes I like the nesting semantics that Twan proposed.

But with regard to naming, I think the name 'unmask' is a bit
misleading because it doesn't unmask asynchronous exceptions. What it
does is remove a layer of masking so to speak. I think the names of
the functions should reflect the nesting or stacking behavior. Maybe
something like:

addMaskingLayer :: IO a -> IO a
removeMaskingLayer :: IO a -> IO a
nrOfMaskingLayers :: IO Int

However I do find those a bit long and ugly...

regards,

Bas
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Asynchronous exception wormholes kill modularity

2010-03-25 Thread Brandon S. Allbery KF8NH

On Mar 25, 2010, at 19:16 , Bas van Dijk wrote:

But with regard to naming, I think the name 'unmask' is a bit
misleading because it doesn't unmask asynchronous exceptions. What it
does is remove a layer of masking so to speak. I think the names of
the functions should reflect the nesting or stacking behavior. Maybe
something like:

addMaskingLayer :: IO a -> IO a
removeMaskingLayer :: IO a -> IO a
nrOfMaskingLayers :: IO Int

However I do find those a bit long and ugly...



maskP, maskV, masking.  :)

--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allb...@kf8nh.com
system administrator [openafs,heimdal,too many hats] allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university    KF8NH




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe