Re: [Haskell-cafe] decoupling type classes

2012-01-14 Thread Yin Wang
 Also, you don't seem to have thought about the question of parametric
 instances: do you allow them or not, if you do, what computational
 power do they get etc.?

I may or may not have thought about it. Maybe you can give an example
of parametric instances where there could be problems, so that I can
figure out whether my system works on the example or not.


 I'm surprised that you propose passing all type class methods
 separately. It seems to me that for many type classes, you want to
 impose a certain correspondence between the types of the different
 methods in a type class (for example, for the Monad class, you would
 expect return to be of type (a -> m a) if (>>=) is of type (m a -> (a
 -> m b) -> m b)). I would expect that inferring these relations in
 each function that uses either of the methods will lead to overly
 general inferenced types and the need for more guidance to the type
 inferencer?

I thought they should be of type (a -> m a) and (m a -> (a -> m b) ->
m b), but I just found that they should also work if they were
of type (c -> m c) and (m a -> (a -> m b) -> m b).

It doesn't seem to really hurt. Either we will have actual types
when they are called (thus catching type errors), or, if they stay
polymorphic, c will be unified with a when they bind. Also, return and
(>>=) will be dispatched to the correct instances just as before.


 By separating the methods, you would also lose the laws that associate
 methods in a type class, right?

 An alternative to what you suggest, is the approach I recommend for
 using instance arguments: wrapping all the methods in a standard data
 type (i.e. define the dictionary explicitly), and pass this around as
 an implicit argument.

I went quickly through your paper and manual and I like the explicit
way. The examples show that the records seem to be a good way to group
the overloaded functions, so I have the impression that grouping and
overloading are orthogonal features. But in your paper I haven't
seen any overloaded functions outside of records, so I guess they are
somehow tied together in your implementation, which is not necessary.

Maybe we can let the user choose to group or not. If they want to
group and force further constraints among the overloaded functions,
they can use overloaded records and access the functions through the
records; otherwise, they can define overloaded functions separately
and just use them directly. This way also makes the implementation
more modular.
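For readers unfamiliar with the record approach under discussion, a dictionary-passing sketch in plain Haskell (the names here are invented for illustration) shows why grouping and overloading can be seen as orthogonal: the record's only extra contribution is the constraint that all its operations mention the same type parameter.

```haskell
-- A hypothetical explicit dictionary: the record groups the overloaded
-- operations and ties them to a single type parameter.
data MonoidDict a = MonoidDict
  { mdEmpty  :: a
  , mdAppend :: a -> a -> a
  }

-- "Instances" are ordinary values ...
intSumDict :: MonoidDict Int
intSumDict = MonoidDict 0 (+)

listDict :: MonoidDict [b]
listDict = MonoidDict [] (++)

-- ... and an overloaded function receives the dictionary explicitly.
mconcatWith :: MonoidDict a -> [a] -> a
mconcatWith d = foldr (mdAppend d) (mdEmpty d)
```

An ungrouped overloaded function would simply take its single operation as a plain argument instead of a record, which is the "use them directly" alternative described above.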


 For this example, one might also argue that the problem is in fact
 that the Num type class is too narrow, and + should instead be defined
 in a parent type class (Monoid comes to mind) together with 0 (which
 also makes sense for strings, by the way)?

I guess deeper hierarchies solve only some problems like this one;
in general this approach complicates things, because the overloaded
functions are not in essence related.


 There is another benefit of this decoupling: it can subsume the
 functionality of MPTC. Because the methods are no longer grouped,
 there is no “common” type parameter to the methods. Thus we can easily
 have more than one parameter in the individual methods and
 conveniently use them as MPTC methods.

 Could you explain this a bit further?

In my system, there are no explicit declarations containing type
variables. The declaration "overload g" is all that is needed.

For example,

overload g
 ... ...
f x (Int y) = g x y


then, f has the inferred type:

'a -> Int -> {{ g :: 'a -> Int -> 'b }} -> 'b

(I borrowed your notation here.)

Here it automatically infers the type for g ('a - Int - 'b) just
from its _usage_ inside f, as if there were a type class definition
like:

class G a where
  g :: a -> Int -> b

So not only do we not need to define type classes, we don't even need
to declare the principal types of the overloaded functions. We can
infer them from their usage, and they don't even need to have the same
principal type! All it takes is:

overload g

And even this is not really necessary. It is for sanity purposes - to
avoid inadvertent overloading.

So if g is used as:

f x y (Int z) = g x z y

then f has type 'a -> 'b -> Int -> {{ g :: 'a -> Int -> 'b -> 'c }} -> 'c

Then g will be equivalent to the one you would have defined as an MPTC method.
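In standard Haskell terms, the inferred constraint corresponds roughly to a multi-parameter type class in which every type appearing in g's signature becomes a class parameter. A hypothetical sketch (the class and the instance are invented purely for illustration, not taken from either system):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- Each type variable of the method becomes a parameter of the class,
-- mirroring the inferred constraint {{ g :: a -> Int -> b -> c }}.
class G a b c where
  g :: a -> Int -> b -> c

-- One possible instance, chosen only to make the example runnable.
instance G String Bool String where
  g s n b = s ++ show n ++ show b

-- The function from the post, with the constraint written explicitly.
f :: G a b c => a -> b -> Int -> c
f x y z = g x z y
```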


 I would definitely argue against treating undefined variables as
 overloaded automatically. It seems this will lead to strange errors if
 you write typos, for example.

I agree, thus I will keep the overload keyword and check that the
unbound variables have been declared as overloaded before generating
the implicit argument.


 But the automatic overloading of the undefined may be useful in
 certain situations. For example, if we are going to use Haskell as a
 shell language. Every “command” must be evaluated when we type it.
 If we have mutually recursive definitions, the shell will report
 “undefined variables” either way we order the functions. The automatic
 overloading may solve this problem. The undefined 

Re: [Haskell-cafe] How to make asynchronous I/O composable and safe?

2012-01-14 Thread Joey Adams
On Sat, Jan 14, 2012 at 1:29 AM, Bardur Arantsson s...@scientician.net wrote:
 So, the API becomes something like:

   runSocketServer :: ((Chan a, Chan b) -> IO ()) -> ... -> IO ()

 where the first parameter contains the client logic and A is the type of
 the messages from the client and B is the type of the messages which are
 sent back to the client.

Thanks, that's a good idea.  Even if I only plan to receive in one
thread, placing the messages in a Chan or TChan helps separate my
application thread from the complexities of connection management.

Is there something on Hackage that will do this for me?  Or will I
need to roll my own?  Namely, convert a network connection to a pair
of channels, and close the connection automatically.  Something like
this:

-- | Spawn two threads, one which populates the first channel with messages
-- from the other host, and another which reads the second channel and sends
-- its messages to the other host.
--
-- Run the given computation, passing it these channels.  When the computation
-- completes (or throws an exception), sending and receiving will stop, and the
-- connection will be closed.
--
-- If either the receiving thread or sending thread encounter an exception,
-- sending and receiving will stop, and an asynchronous exception will be
-- thrown to your thread.
channelize :: IO msg_in                                  -- ^ Receive callback
           -> (msg_out -> IO ())                         -- ^ Send callback
           -> IO ()                                      -- ^ Close callback
           -> (TChan msg_in -> TChan msg_out -> IO a)    -- ^ Inner computation
           -> IO a
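For what it's worth, something along these lines could serve as a first cut. This is a minimal sketch following the signature above; it spawns the two worker threads and tears them down when the inner computation ends, but it omits the asynchronous re-throw into the caller's thread that the comment asks for (a complete version would catch exceptions in the workers and forward them with `throwTo`).

```haskell
import Control.Concurrent (forkIO, killThread)
import Control.Concurrent.STM (TChan, atomically, newTChanIO, readTChan, writeTChan)
import Control.Exception (finally)
import Control.Monad (forever)

-- Minimal sketch: no exception forwarding, just setup and teardown.
channelize :: IO msg_in                                  -- ^ Receive callback
           -> (msg_out -> IO ())                         -- ^ Send callback
           -> IO ()                                      -- ^ Close callback
           -> (TChan msg_in -> TChan msg_out -> IO a)    -- ^ Inner computation
           -> IO a
channelize recv send close inner = do
  inChan  <- newTChanIO
  outChan <- newTChanIO
  -- One thread pumps received messages into the in-channel ...
  recvT <- forkIO (forever (recv >>= atomically . writeTChan inChan))
  -- ... and one drains the out-channel into the send callback.
  sendT <- forkIO (forever (atomically (readTChan outChan) >>= send))
  inner inChan outChan
    `finally` (killThread recvT >> killThread sendT >> close)
```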

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] named pipe interface

2012-01-14 Thread Serge D. Mechveliani
On Fri, Jan 13, 2012 at 12:19:34PM -0800, Donn Cave wrote:
 Quoth Serge D. Mechveliani mech...@botik.ru,
 ...
  Initially, I did the example by the Foreign Function Interface for C.
 But then, I thought: "But this is unnatural! Use plainly the standard
 Haskell IO, it has everything."
 
  So, your advice is return to FFI ?
 
 Well, it turns out that the I/O system functions in System.Posix.IO
 may work for your purposes.  I was able to get your example to work
 with these functions, which correspond to open(2), read(2), write(2).
 
 I would also use these functions in C, as you did in your C program.
 Haskell I/O functions like hGetLine are analogous to C library I/O
 like fgets(3) - in particular, they're buffered, and I would guess
 that's why they don't work for you here.
 
 Specifically,
openFile "toA" WriteOnly Nothing defaultFileFlags
openFile "fromA" ReadOnly Nothing defaultFileFlags
 
fdWrite toA str
(str, len) <- fdRead fromA 64
return str


Great! Thank you very much.
As I find,  Posix.IO  is not part of the standard, but it comes with GHC.
Anyway, it fits my purpose.
By  openFile  you probably mean  openFd.

Another point is the number of open files, for a long loop.
I put
  toA_IO   = openFd "toA"   WriteOnly Nothing defaultFileFlags
  fromA_IO = openFd "fromA" ReadOnly  Nothing defaultFileFlags

  axiomIO :: String -> IO String
  axiomIO str = do
    toA   <- toA_IO
    fromA <- fromA_IO
    fdWrite toA str
    (str, _len) <- fdRead fromA 64
    return str

When applying  axiomIO  in a loop of 9000 strings, it breaks:
too many open files.
I do not understand why it is so, because  toA_IO and fromA_IO  are
global constants (I do not have any experience with `do').

Anyway, I have changed this to

  toA   = unsafePerformIO toA_IO
  fromA = unsafePerformIO fromA_IO

  axiomIO :: String -> IO String
  axiomIO str = do
    fdWrite toA str
    (str, _len) <- fdRead fromA 64
    return str

And now, it works in long loops too
(I need to understand further whether my usage of  unsafePerformIO  
really damages the project).
Its performance is  9/10  of the  C-to-C  performance 
(ghc -O, gcc -O, Linux Debian). 
It is still slow:  12 small strings/second on a 2 GHz machine.
But this is something to start with.
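The usual safeguard for this top-level unsafePerformIO pattern is a NOINLINE pragma (on a binding that takes no arguments), so GHC cannot duplicate the call and create a second descriptor. A sketch with an IORef standing in for the Fd, so it is self-contained:

```haskell
import Data.IORef (IORef, newIORef, atomicModifyIORef', readIORef)
import System.IO.Unsafe (unsafePerformIO)

-- Top-level "global" created exactly once.  NOINLINE keeps GHC from
-- duplicating the unsafePerformIO call, which would create more than
-- one ref -- the same hazard as opening the pipe more than once.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

bump :: IO Int
bump = atomicModifyIORef' counter (\n -> (n + 1, n + 1))
```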

  I was able to get your example to work
 with these functions, which correspond to open(2), read(2), write(2).

 I would also use these functions in C, as you did in your C program.
 Haskell I/O functions like hGetLine are analogous to C library I/O
 like fgets(3) - in particular, they're buffered, and I would guess
 that's why they don't work for you here.

Indeed. Initially, I tried  C-to-C,  and used  fgets, fputs, fflush.
And it did not work; it required opening/closing files inside the loop,
and my attempts failed. Again, I do not understand why (do they wait
till the buffer is full?).

Then, I tried  read/write,  as it is in  fifoFromA.c  which I posted.
And it works.
Now,  Haskell-to-C  gives hope. Nice.

Thanks,

--
Sergei
mech...@botik.ru





Re: [Haskell-cafe] How to make asynchronous I/O composable and safe?

2012-01-14 Thread Daniel Waterworth
I've been trying to write networking code in Haskell too. I've also
come to the conclusion that channels are the way to go. However,
what's missing in the standard `Chan` type, which is essential for my
use-case, is the ability to do the equivalent of the unix select call.
My other slight qualm is that the type doesn't express the direction
of data (though this is easy to add afterwards).

I know of the chp package, but in order to learn how it worked, I
spent a day writing my own version. I've kept the API similar to that
of the standard Chan's. If it would be useful to you as well, I'll
happily open source it sooner rather than later,

Daniel

P.S. I'd avoid TChan for networking code, as reading from a TChan is
a busy operation. [1]

[1] 
http://hackage.haskell.org/packages/archive/stm/2.2.0.1/doc/html/src/Control-Concurrent-STM-TChan.html#readTChan

On 14 January 2012 10:42, Joey Adams joeyadams3.14...@gmail.com wrote:
 [...]



Re: [Haskell-cafe] How to make asynchronous I/O composable and safe?

2012-01-14 Thread Daniel Waterworth
Disregard that last comment on `TChan`s; `retry` blocks. You learn a new
thing every day [=

Daniel

On 14 January 2012 11:27, Daniel Waterworth da.waterwo...@gmail.com wrote:
 [...]



Re: [Haskell-cafe] How to make asynchronous I/O composable and safe?

2012-01-14 Thread Peter Simons
Hi guys,

  I'm not happy with asynchronous I/O in Haskell.  It's hard to reason
  about, and doesn't compose well.
 
  Async I/O *is* tricky if you're expecting threads to do their own
  writes/reads directly to/from sockets. I find that using a
  message-passing approach for communication makes this much easier.

yes, that is true. I've always felt that spreading IO code all over the
software is a choice that makes the programmer's life unnecessarily hard.
The (IMHO superior) alternative is to have one central IO loop that
generates buffers of input, passes them to a callback function, and
receives buffers of output in response.

I have attached a short module that implements the following function:

  type ByteCount= Word16
  type Capacity = Word16
  data Buffer   = Buf !Capacity !(Ptr Word8) !ByteCount
  type BlockHandler st  = Buffer -> st -> IO (Buffer, st)

  runLoop :: ReadHandle -> Capacity -> BlockHandler st -> st -> IO st

That setup is ideal for implementing streaming services, where there is
only one connection on which some kind of dialog between client/server
takes place, e.g. an HTTP server.

Programs like Bittorrent, on the other hand, are much harder to design,
because there's a great number of seemingly individual I/O contexts
(i.e. the machine is talking to hundreds, or even thousands of other
machines), but all those communications need to be coordinated in one
way or another.

A solution for that problem invariably ends up looking like a massive
finite state machine, which is somewhat unpleasant.

Take care,
Peter



{-# LANGUAGE DeriveDataTypeable #-}
{- |
   Module  :  BlockIO
   License :  BSD3

   Maintainer  :  sim...@cryp.to
   Stability   :  provisional
   Portability :  DeriveDataTypeable

   'runLoop' drives a 'BlockHandler' with data read from the
   input stream until 'hIsEOF' ensues. Everything else has
   to be done by the callback; runLoop just does the I\/O.
   But it does it /fast/.
-}

module BlockIO where

import Prelude hiding ( catch, rem )
import Control.Exception
import Control.Monad.State
import Data.List
import Data.Typeable
import System.IO
import System.IO.Error hiding ( catch )
import Foreign  hiding ( new )
import System.Timeout

-- * Static Buffer I\/O

type ReadHandle  = Handle
type WriteHandle = Handle

type ByteCount = Word16
type Capacity  = Word16
data Buffer= Buf !Capacity !(Ptr Word8) !ByteCount
 deriving (Eq, Show, Typeable)

-- |Run the given computation with an initialized, empty
-- 'Buffer'. The buffer is gone when the computation
-- returns.

withBuffer :: Capacity -> (Buffer -> IO a) -> IO a
withBuffer 0 _ = fail "BlockIO.withBuffer with size 0 doesn't make sense"
withBuffer n f = bracket cons dest f
  where
  cons = mallocArray (fromIntegral n) >>= \p -> return (Buf n p 0)
  dest (Buf _ p _) = free p

-- |Drop the first @n <= size@ octets from the buffer.

flush :: ByteCount -> Buffer -> IO Buffer
flush 0 buf   = return buf
flush n (Buf cap ptr len) = assert (n <= len) $ do
  let ptr' = ptr `plusPtr` fromIntegral n
      len' = fromIntegral len - fromIntegral n
  when (len' > 0) (copyArray ptr ptr' len')
  return (Buf cap ptr (fromIntegral len'))

type Timeout = Int

-- |If there is space, read and append more octets; then
-- return the modified buffer. In case of 'hIsEOF',
-- 'Nothing' is returned. If the buffer is full already, a
-- 'BufferOverflow' exception is thrown. If the timeout is
-- exceeded, 'ReadTimeout' is thrown.

slurp :: Timeout -> ReadHandle -> Buffer -> IO (Maybe Buffer)
slurp to h b@(Buf cap ptr len) = do
  when (cap <= len) (throw (BufferOverflow h b))
  timeout to (handleEOF wrap) >>=
    maybe (throw (ReadTimeout to h b)) return
  where
  wrap = do let ptr' = ptr `plusPtr` fromIntegral len
                n    = cap - len
            rc <- hGetBufNonBlocking h ptr' (fromIntegral n)
            if rc > 0
               then return (Buf cap ptr (len + fromIntegral rc))
               else hWaitForInput h (-1) >> wrap

-- * BlockHandler and I\/O Driver

-- |A callback function suitable for use with 'runLoop'
-- takes a buffer and a state, then returns a modified
-- buffer and a modified state. Usually the callback will
-- use 'slurp' to remove data it has processed already.

type BlockHandler st = Buffer -> st -> IO (Buffer, st)

type ExceptionHandler st e = e -> st -> IO st

-- |Our main I\/O driver.

runLoopNB
  :: (st -> Timeout)                  -- ^ user state provides timeout
  -> (SomeException -> st -> IO st)   -- ^ user provides I\/O error handler
  -> ReadHandle                       -- ^ the input source
  -> Capacity                         -- ^ I\/O buffer size
  -> BlockHandler st                  -- ^ callback
  -> st                               -- ^ initial callback state
  -> IO st                            -- ^ return final callback state
runLoopNB mkTO errH hIn cap f initST = withBuffer cap (`ioloop` initST)
  where
  ioloop buf st = buf `seq` st `seq`
    handle (`errH` st) $ do
      rc <- slurp (mkTO st) hIn buf

Re: [Haskell-cafe] How to make asynchronous I/O composable and safe?

2012-01-14 Thread Bardur Arantsson

On 01/14/2012 11:42 AM, Joey Adams wrote:

[...]



Unless TCP is an absolute requirement, something like 0MQ[1,2] may be 
worth investigating.


It handles all the nasty details and you get a simple message-based 
interface with lots of nice things like pub-sub, request-reply, etc. etc.


[1] http://hackage.haskell.org/package/zeromq-haskell-0.8.2
[2] http://www.zeromq.org/





Re: [Haskell-cafe] How to make asynchronous I/O composable and safe?

2012-01-14 Thread Peter Simons
Hi Daniel,

  I've been trying to write networking code in Haskell too. I've also
  come to the conclusion that channels are the way to go.

isn't a tuple of input/output channels essentially the same as a stream
processor arrow? I found the example discussed in the arrow paper [1]
very enlightening in that regard. There also is a Haskell module that
extends the SP type to support monadic IO at [2].
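For readers who have not seen it, the stream-processor type from the arrow paper can be sketched roughly as follows (simplified from memory; the streamproc package's actual definitions may differ in detail):

```haskell
-- The classic stream-processor type: a process either emits an output
-- and continues, or waits for the next input.
data SP a b = Put b (SP a b) | Get (a -> SP a b)

-- Run an SP over a finite input list, collecting its outputs.
runSP :: SP a b -> [a] -> [b]
runSP (Put b sp) xs       = b : runSP sp xs
runSP (Get _)    []       = []
runSP (Get k)    (x : xs) = runSP (k x) xs

-- A stateless element-wise processor, akin to a loop that reads from
-- an input channel and writes to an output channel.
mapSP :: (a -> b) -> SP a b
mapSP f = Get (\x -> Put (f x) (mapSP f))
```

The Get/Put alternation plays the role of the input/output channel pair, which is what makes the two formulations feel equivalent.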

Take care,
Peter


[1] 
http://www.ittc.ku.edu/Projects/SLDG/filing_cabinet/Hughes_Generalizing_Monads_to_Arrows.pdf
[2] http://hackage.haskell.org/package/streamproc




Re: [Haskell-cafe] Monad-control rant

2012-01-14 Thread Mikhail Vorozhtsov

On 01/10/2012 11:12 PM, Edward Z. Yang wrote:

Excerpts from Mikhail Vorozhtsov's message of Tue Jan 10 09:54:38 -0500 2012:

On 01/10/2012 12:17 AM, Edward Z. Yang wrote:

Hello Mikhail,

Hi.


(Apologies for reviving a two month old thread). Have you put some thought into
whether or not these extra classes generalize in a way that is not /quite/ as
general as MonadBaseControl (so as to give you the power you need) but still
allow you to implement the functionality you are looking for? I'm not sure but
it seems something along the lines of unwind-protect ala Scheme might be
sufficient.

I'm not sure I'm following you. The problem with MonadBaseControl is
that it is /not/ general enough.


Sorry, I misspoke.  The sense you are using it in is: the more general a type
class is, the more instances you can write for it. The design goal I'm going
for here is a single signature which covers MonadAbort/Recover/Finally in a
way that unifies them.  Which is not more general, except in the sense that it
contains more type classes (certainly not general in the mathematical sense.)

Hm, MonadAbort/Recover/Finally are independent (I made MonadAbort a 
superclass of MonadRecover purely for reasons of convenience). It's easy 
to imagine monads that have an instance of one of the classes but not of 
the others.



It assumes that you can eject/inject
all the stacked effects as a value of some data type. Which works fine
for the standard transformers because they are /implemented/ this way.
But not for monads that are implemented in operational style, as
interpreters, because the interpreter state cannot be internalized. This
particular implementation bias causes additional issues when the lifted
operation is not fully suited for ejecting/injecting. For example the
`Control.Exception.finally` (or unwind-protect), where we can neither
inject (at least properly) the effects into nor eject them from the
finalizer. That's why I think that the whole lift operations from the
bottom approach is wrong (the original goal was to lift
`Control.Exception`). The right way would be to capture the control
semantics of IO as a set of type classes[1] and then implement the
general versions of the operations you want to lift. That's what I tried
to do with the monad-abord-fd package.


I think this is generally a useful goal, since it helps define the semantics
of IO more sharply.  However, the exceptions mechanism is actually fairly
well specified, as far as semantics go; see "A Semantics for Imprecise
Exceptions" and "Asynchronous Exceptions in Haskell".  So I'm not sure if
monad-abort-fd achieves the goal of expressing these interfaces, in
typeclass form, as well as allowing users to interoperate cleanly with
existing language support for these facilities.
I certainly didn't do that in any formal way. I was thinking something 
like this: if we identify the basic IO-specific control operations, 
abstract them but make sure they interact in the same way they do in IO, 
then any derivative IO control operation (implemented on top of the 
basic ones) could be lifted just by changing the type signature. The key 
words here are of course "interact in the same way".



[1] Which turn out to be quite general: MonadAbort/Recover/Finally are
just a twist of MonadZero/MonadPlus


Now that's interesting! Is this an equivalence, e.g. MonadZero/MonadPlus
imply MonadAbort/Recover/Finally and vice-versa, or do you need to make
some slight modifications?  It seems that you somehow need support for
multiple zeros of the monad, as well as a way of looking at them.
Yes, something along those lines. MonadAbort is a generalization of 
MonadZero, MonadRecover is a specialization of the left catch version 
of MonadPlus (aka MonadOr). MonadFinally is about adapting


finally0 m f = do
  r ← m `morelse` (f Nothing >> mzero)
  (r, ) <$> f (Just r)

to the notion of failure associated with a particular monad.



Re: [Haskell-cafe] How to make asynchronous I/O composable and safe?

2012-01-14 Thread Daniel Waterworth
Hi Peter,

streamproc is a very interesting package, I'll surely use it somewhere
in the future. However, I'm not convinced that this solves my
immediate problem, but perhaps this is due to my inexperience with
arrows. My problem is:

I have a number of network connections and I have a system that does
things. I want the network connections to interact with the system. I
also want the system to be able to interact with the network
connections by way of a pub/sub style message bus.

The only way I can see stream processors working in this scenario is
if all of the events of the system are handled in a single thread. The
events are then pushed into the stream processor and actions are
pulled out. This isn't acceptable because the amount of logic in the
stream processor will be fairly small for my problem in comparison
with the logic that is required to mux/demux events/actions onto
sockets. It's also a problem that there's a single threaded
bottleneck.

Daniel

On 14 January 2012 11:58, Peter Simons sim...@cryp.to wrote:
 [...]



Re: [Haskell-cafe] STM atomic blocks in IO functions

2012-01-14 Thread Ketil Malde
Bryan O'Sullivan b...@serpentine.com writes:

 The question is a simple one. Must all operations on a TVar happen
 within *the same* atomically block, or am I am I guaranteed thread
 safety if, say, I have a number of atomically blocks in an IO
 function.

 If you want successive operations to see a consistent state, they must
 occur in the same atomically block.

I'm not sure I understand the question, nor the answer?  I thought the
idea was that state should be consistent on the entry and exit of each
atomically block.  So you can break your program into multiple
transactions, but each transaction should be a semantically complete
unit.
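Concretely, the rule can be illustrated with the usual account-transfer sketch (the names are invented for illustration):

```haskell
import Control.Concurrent.STM

-- One semantically complete transaction: no other thread can observe
-- the state between the debit and the credit.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to n = atomically $ do
  modifyTVar' from (subtract n)
  modifyTVar' to (+ n)

-- Splitting this into two separate 'atomically' blocks would still
-- type-check and run, but a thread scheduled between them could see
-- the amount missing from both accounts: each block is consistent on
-- entry and exit, yet the pair is not a complete unit.
```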

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants



Re: [Haskell-cafe] named pipe interface

2012-01-14 Thread Donn Cave
Quoth Serge D. Mechveliani mech...@botik.ru,

 By  openFile  you, probably, mean  openFd.

Yes, sorry!

 Another point is the number of open files, for a long loop.
...
   toA_IO   = openFd "toA"   WriteOnly Nothing defaultFileFlags
...
 When applying  axiomIO  in a loop of 9000 strings, it breaks:
 too many open files.
 I do not understand why it is so, because  toA_IO and fromA_IO  are 
 global constants (I have not any experience with `do').

toA_IO is a global constant of type IO Fd, not Fd.  You now see
the importance of this distinction - the action actually transpires at
toA <- toA_IO, and each time that executes, you get a new file
descriptor.
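The distinction can be seen with an ordinary IORef in place of the Fd; the following sketch uses invented names and stands in for the pipe-opening code:

```haskell
import Data.IORef (IORef, newIORef, writeIORef, readIORef)

-- An "IO resource recipe", analogous to toA_IO :: IO Fd in the post.
mkResource :: IO (IORef Int)
mkResource = newIORef 0

-- Each time the action runs, a *new* resource is created.
demo :: IO Bool
demo = do
  r1 <- mkResource      -- first run of the action
  r2 <- mkResource      -- second run: a distinct ref, just as each
                        -- 'toA <- toA_IO' opened a fresh descriptor
  writeIORef r1 42
  v <- readIORef r2     -- unaffected by the write to r1
  return (v == 0)
```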

 Anyway, I have changed this to

   toA   = unsafePerformIO toA_IO
   fromA = unsafePerformIO fromA_IO
...
 (I need to understand further whether my usage of  unsafePerformIO  
 really damages the project).

It's actually similar to the way some libraries initialize global
values, but there are some additional complexities and it isn't
clear to me that it's all guaranteed to work anyway.  You can read
much more about this here:
 http://www.haskell.org/haskellwiki/Top_level_mutable_state
I'm no expert in this, but there surely are experts here on haskell-cafe,
so if you want to take this up, you might start a new topic here,
something like "global initialization with unsafePerformIO",
describe what you're doing and explain why you can't just pass
the open file descriptors as function parameters.

...
 Indeed. Initially, I tried  C - C,  and used  fgets, fputs, fflush.
 And it did not work, it required to open/close files inside a loop;
 I failed with attempts. Again, do not understand, why (do they wait
 till the buffer is full?).

I don't know.  When I was younger, I used to track these problems down
and try to explain in detail why buffered I/O is a bad bet with pipes,
sockets etc.  I don't think anyone listened.  I think I am going to
experiment with "I am old, so listen to me" and see if it works any better.

Donn



Re: [Haskell-cafe] named pipe interface

2012-01-14 Thread Brandon Allbery
On Sat, Jan 14, 2012 at 11:57, Donn Cave d...@avvanta.com wrote:

 I don't know.  When I was younger, I used to track these problems down
 and try to explain in detail why buffered I/O is a bad bet with pipes,
 sockets etc.  I don't think anyone listened.  I think I am going to
 experiment with "I am old, so listen to me" and see if it works any better.


If nothing else, you can transfer one of those explanations into the wiki
and point people to it when it comes up.  (Nobody pays attention to stuff
not directly affecting them.  This is annoying but somewhat understandable,
especially given that haskell-cafe tends toward information overload /
death by a thousand mathematical constructs :)

-- 
brandon s allbery  allber...@gmail.com
wandering unix systems administrator (available) (412) 475-9364 vm/sms


Re: [Haskell-cafe] STM atomic blocks in IO functions

2012-01-14 Thread Steffen Schuldenzucker

On 01/14/2012 03:55 PM, Ketil Malde wrote:

Bryan O'Sullivan <b...@serpentine.com> writes:


The question is a simple one. Must all operations on a TVar happen
within *the same* atomically block, or am I guaranteed thread
safety if, say, I have a number of atomically blocks in an IO
function?



If you want successive operations to see a consistent state, they must
occur in the same atomically block.


I'm not sure I understand the question, nor the answer?  I thought the
idea was that state should be consistent on the entry and exit of each
atomically block.  So you can break your program into multiple
transactions, but each transaction should be a semantically complete
unit.


I think consistent state here means that you can be sure no other 
thread has modified, say, a TVar within the current 'atomically' block.


E.g. for MVars, you could /not/ be sure that

  void (takeMVar mvar) >> putMVar mvar 5

won't block if mvar is full at the beginning, because a different thread 
might put to mvar between the two actions. However, in


  atomically $ void (readTVar tvar) >> writeTVar tvar 5

, this is not possible: the action passed to 'atomically' won't be 
influenced by any other threads while it's running, hence the name.
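A minimal sketch of that guarantee (the helper name is invented): combining
the read and the write of a TVar inside one 'atomically' makes the pair a
single indivisible transaction, so no thread can interleave between them:

```haskell
import Control.Concurrent.STM

-- Read-modify-write as ONE transaction: no other thread can observe
-- or modify the TVar between the readTVar and the writeTVar.
incr :: TVar Int -> STM ()
incr tv = readTVar tv >>= writeTVar tv . (+ 1)
```

Splitting the read and the write into two separate 'atomically' calls would
reintroduce exactly the MVar-style race shown above.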


-- Steffen



Re: [Haskell-cafe] STM atomic blocks in IO functions

2012-01-14 Thread Rob Stewart
On 14 January 2012 18:05, Steffen Schuldenzucker
sschuldenzuc...@uni-bonn.de wrote:

 I think consistent state here means that you can be sure no other thread
 has modified, say, a TVar within the current 'atomically' block.

OK, well take a modified example, where I am wanting to call an IO
function within an atomically block:
---
import Control.Concurrent.STM
import Control.Monad.IO.Class (liftIO)

addThree :: TVar Int -> STM ()
addThree t = do
  i <- liftIO three -- Problem line
  ls <- readTVar t
  writeTVar t (ls + i)

three :: IO Int
three = return 3

main :: IO ()
main = do
  val <- atomically $ do
   tvar <- newTVar 0
   addThree tvar
   readTVar tvar
  putStrLn $ "Value: " ++ show val
---

Are IO functions permissible in STM atomically blocks? If so how? If
not, how would one get around a problem of having to use an IO
function to retrieve a value that is to be written to a TVar ?

--
Rob



Re: [Haskell-cafe] STM atomic blocks in IO functions

2012-01-14 Thread Daniel Waterworth
On 14 January 2012 19:24, Rob Stewart robstewar...@googlemail.com wrote:
 On 14 January 2012 18:05, Steffen Schuldenzucker
 sschuldenzuc...@uni-bonn.de wrote:

 I think consistent state here means that you can be sure no other thread
 has modified a, say, TVar, within the current 'atomically' block.

 OK, well take a modified example, where I am wanting to call an IO
 function within an atomically block:
 ---
 import Control.Concurrent.STM
 import Control.Monad.IO.Class (liftIO)

 addThree :: TVar Int -> STM ()
 addThree t = do
  i <- liftIO three -- Problem line
  ls <- readTVar t
  writeTVar t (ls + i)

 three :: IO Int
 three = return 3

 main :: IO ()
 main = do
  val <- atomically $ do
   tvar <- newTVar 0
   addThree tvar
   readTVar tvar
  putStrLn $ "Value: " ++ show val
 ---

 Are IO functions permissible in STM atomically blocks? If so how? If
 not, how would one get around a problem of having to use an IO
 function to retrieve a value that is to be written to a TVar ?

 --
 Rob

No, that's not possible. An STM transaction may be retried several times,
so allowing IO doesn't make sense. Instead, pass any values that
you need into the transaction, e.g.:

line <- getLine
atomically $ do
  writeTVar v line
  ...
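A self-contained version of that pattern (the helper names are invented):
perform the IO first, then hand the pure result to the transaction:

```haskell
import Control.Concurrent.STM

-- Run an IO action outside the transaction, then store its result
-- in a TVar inside one.
store :: IO String -> TVar String -> IO ()
store getInput v = do
  s <- getInput                 -- IO happens here, outside 'atomically'
  atomically (writeTVar v s)    -- only the pure value enters STM
```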

Daniel



Re: [Haskell-cafe] decoupling type classes

2012-01-14 Thread Yin Wang
On Sat, Jan 14, 2012 at 2:38 PM, Dominique Devriese
dominique.devri...@cs.kuleuven.be wrote:
 I may or may not have thought about it. Maybe you can give an example
 of parametric instances where there could be problems, so that I can
 figure out whether my system works on the example or not.

 The typical example would be

 instance Eq a => Eq [a] where
  [] == [] = True
  (a : as) == (b : bs) = a == b && as == bs
  _ == _ = False

It can handle this case, although it doesn't handle it as a parametric
instance. I suspect that we don't need the concept of parametric
instances at all. We just search for instances recursively at the
call site:

1. If g has an implicit parameter f, search for values that
match the name and instantiated type in the current scope.

2. If a value is found, use it as the argument.

3. Check if the value is a function with implicit parameters; if so,
search for values that match the name and type of the implicit
parameters.

4. Do this recursively until no more arguments contain implicit parameters.
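For comparison, this recursive search behaves much like explicit dictionary
passing, where the "instance" for [a] is a function of the "instance" for a.
A hand-rolled sketch (the names are invented; this is not Yin's system):

```haskell
-- An equality "instance" is just a comparison function.
type EqDict a = a -> a -> Bool

eqInt :: EqDict Int
eqInt = (==)

-- The "parametric instance": the dictionary for [a] is computed
-- from the dictionary for a, mirroring the recursive search above.
eqList :: EqDict a -> EqDict [a]
eqList eqA = go
  where
    go []     []     = True
    go (a:as) (b:bs) = eqA a b && go as bs
    go _      _      = False
```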


 This coupling you talk about is not actually there for instance
 arguments. Instance arguments are perfectly usable without records.
 There is some special support for automatically constructing record
 projections with instance arguments though.

Cool. So it seems to be close to what I had in mind.


 I am not sure about the exact workings of your system, but I want to
 point out that alternative choices can be made about the workings of
 inferencing and resolving type-class instances such that local
 instances can be allowed. For example, in Agda, we do not infer
 instance arguments and we give an error in case of ambiguity, but
 because of this, we can allow local instances...

Certainly it should report an error when there are ambiguities, but
sometimes it should report an error even when there is only one value
that matches the name and type. For example,

foo x =
  let overload bar (x:Int) = x + 1
  in \() -> bar x


baz = foo (1::Int)

Even if we have only one definition of "bar" in the program, we should
not resolve it to the definition of "bar" inside "foo", because that
"bar" is not visible at the call site "foo (1::Int)". We should report
an error in this case. Thinking of "bar" as a typed, dynamically scoped
variable helps to justify this decision.



Re: [Haskell-cafe] How to make asynchronous I/O composable and safe?

2012-01-14 Thread wren ng thornton

On 1/14/12 6:27 AM, Daniel Waterworth wrote:

p.s I'd avoid the TChan for networking code as reading from a TChan is
a busy operation. [1]

[1] 
http://hackage.haskell.org/packages/archive/stm/2.2.0.1/doc/html/src/Control-Concurrent-STM-TChan.html#readTChan



The `retry`-ness will be rectified whenever the new version of stm is 
pushed out[1], which includes tryReadTChan for one-shot use. Until then, 
you can use the version of tryReadTChan in stm-chans[2] which provides 
the same operation, though less optimized since it's not behind the API 
wall. Once I know which stm release will include the optimized variants, 
the stm-chans version will use CPP to properly select 
between the new version vs the backport, so you can rely on stm-chans to 
provide a compatibility layer for those operations.



[1] http://www.haskell.org/pipermail/cvs-libraries/2011-April/012914.html

[2] 
http://hackage.haskell.org/packages/archive/stm-chans/1.1.0/doc/html/src/Control-Concurrent-STM-TChan-Compat.html


--
Live well,
~wren



Re: [Haskell-cafe] STM atomic blocks in IO functions

2012-01-14 Thread wren ng thornton

On 1/14/12 2:24 PM, Rob Stewart wrote:

Are IO functions permissible in STM atomically blocks?


They are not. The semantics of STM are that each transaction is retried 
until it succeeds, and that the number of times it is retried does not 
affect the program output. Thus, you can only do things in STM which can 
be reverted, since you may have to undo the side-effects whenever the 
transaction is retried.


However, if you're interested in pursuing this approach, you should take 
a look at TwilightSTM which expands the interaction possibilities 
between IO and STM.



If so how? If
not, how would one get around a problem of having to use an IO
function to retrieve a value that is to be written to a TVar ?


If you truly must do IO in the middle of a transaction, the typical 
solution is to use a locking mechanism. For example, you use a TMVar () 
as the lock: taking the () token in order to prevent other threads from 
doing the IO; doing the IO; and then putting the () token back. Thus, 
something like:


do  ...
    atomically $ do
        ...
        () <- takeTMVar lockRunFoo
    x <- runFoo
    atomically $ do
        putTMVar lockRunFoo ()
        ...x...
    ...

However, it's often possible to factor the IO out of the original 
transaction, so you should do so whenever you can. An unfortunate 
downside of the above locking hack is that the STM state is not 
guaranteed to be consistent across the two transactions. You can fake 
read-consistency by moving reads into the first transaction in order to 
bind the values to local variables, as in:


do  ...
    (a,b,c) <- atomically $ do
        ...
        a <- ...
        ...
        b <- ...
        ...
        c <- ...
        ...
        () <- takeTMVar lockRunFoo
        return (a,b,c)
    x <- runFoo
    atomically $ do
        putTMVar lockRunFoo ()
        ...x...a...b...c...
    ...

And you can fake write-consistency by moving writes into the second 
transaction to ensure that they all are committed at once. However, you 
can't use those tricks if you have a complicated back and forth with 
reading and writing.


--
Live well,
~wren



[Haskell-cafe] Haskell Propaganda needed

2012-01-14 Thread Victor Miller
I'm a research mathematician at a research institute with a bunch of other
mathematicians (a number of whom are also expert programmers).  I recently
(starting three months ago) have been learning Haskell.  I'm going to give
a talk to the staff about it.  Most of the audience are pretty experienced
programmers in C/C++/Python, but with little or no exposure to functional
languages.  I'm looking for talks from which I can cannibalize good selling
points.  I was led to Haskell by a somewhat circuitous route: at our place,
as with most of the world, parallel programs (especially using GPUs) are
becoming more important. A little googling led me to a few interesting
projects on automatically mapping computations to GPUs, all of which were based
on Haskell.   I feel that this will be the way to go.  There's one guy on
the staff who's a demon programmer: if someone needs something to be
adapted to GPUs they go to him.  Unfortunately I find reading his code
rather difficult -- it's rather baroque and opaque.  Thus, I'd like
something more high level, and something amenable to optimization
algorithms.

In my former life I worked at IBM research on one of the leading edge
compiler optimization projects, and learned to appreciate the need for
clear semantics in programs, not just for developing correct programs, but
also to allow really aggressive optimizations to be performed.  This is
another reason that I'm interested in functional languages.

I know that I'll get peppered with questions about efficiency.  We (our
staff) are interested in *very* large scale computations which must use the
resources as efficiently as possible.  One of our staff members also opined
that he felt that a lazy language like Haskell wouldn't be acceptable,
since it was impossible (or extremely difficult) to predict the storage use
of such a program.

So, any suggestions are welcome.

Victor


[Haskell-cafe] parse error in pattern, and byte code interpreter

2012-01-14 Thread TP
Hi everybody,

I want to test a higher level language than C/C++ (type inference, garbage 
collection), but much faster than Python. So I am naturally led to Haskell or 
OCaml.

I have read and tested tutorials for the two languages. For the time being, my 
preference goes to Haskell, first because it looks more like Python, and also 
because some things appear not so clean in OCaml, at least for a beginner (put 
a line terminator or not? Why do I have to put "rec" in the definition of a 
recursive function? Etc.).

I have two questions.

1/ Inspiring from tutorials out there, I have tried to write a small formal 
program which is able to distribute n*(x+y) to n*x+n*y. The OCaml version is 
working (see Post Scriptum). However, I have difficulty making the Haskell 
version work. This is my code:

{-----------------------------------------------------------------}
data Expr = Plus Expr Expr
  | Minus Expr Expr
  | Times Expr Expr
  | Divide Expr Expr
  | Variable String
deriving ( Show, Eq )

expr_to_string expr = case expr of
    Times expr1 expr2 -> "(" ++ ( expr_to_string expr1 ) ++ " * "
        ++ ( expr_to_string expr2 ) ++ ")"
    Plus expr1 expr2 -> "(" ++ ( expr_to_string expr1 ) ++ " + "
        ++ ( expr_to_string expr2 ) ++ ")"
    Variable var -> var

distribute expr = case expr of
     Variable var -> var
     Times expr1 Plus( expr2 expr3 ) ->
         Plus ( Times ( expr1 expr2 ) Times ( expr1 expr3 ) )

main = do
    let x = Times ( Variable "n" )
                  ( Plus ( Variable "x" ) ( Variable "y" ) )
    print x
    print ( expr_to_string x )
{-----------------------------------------------------------------}

When I try to run this code with runghc, I obtain:

pattern_matching_example.hs:28:24: Parse error in pattern: expr2

Thus it does not like my pattern "Times expr1 Plus( expr2 expr3 )". Why?
How can I obtain the right result, as with the OCaml code below?

2/ It seems there is no possibility to generate bytecode, contrary to OCaml. 
Is it correct? Is there an alternative?
What is interesting with bytecode run with ocamlrun is that the process of 
generating the bytecode is very fast, so it is very convenient to test the 
program being written, in an efficient workflow. Only at the end the program is 
compiled to get more execution speed.

Thanks a lot in advance.

TP

PS:
---
To test the OCaml tutorial, type:
$ ocamlc -o pattern_matching_example pattern_matching_example.ml
$ ocamlrun ./pattern_matching_example

(* ------------------------------------------------------------ *)
(* from OCaml tutorial, section 'data_types_and_matching.html' *)

(* This is a binary tree *)
type expr = Plus of expr * expr
  | Minus of expr * expr
  | Times of expr * expr
  | Divide of expr * expr
  | Value of string
;;

let v = Times ( Value "n", Plus (Value "x", Value "y") )

let rec to_string e =
    match e with
    Plus ( left, right ) -> "(" ^ (to_string left) ^ " + " ^ (to_string right) ^ ")"
  | Minus ( left, right ) -> "(" ^ (to_string left) ^ " - " ^ (to_string right) ^ ")"
  | Times ( left, right ) -> "(" ^ (to_string left) ^ " * " ^ (to_string right) ^ ")"
  | Divide ( left, right ) -> "(" ^ (to_string left) ^ " / " ^ (to_string right) ^ ")"
  | Value value -> value
;;

(* by type inference, ocaml knows that e is of type expr just below *)
let print_expr e = print_endline ( to_string e );;

print_expr v;;

let rec distribute e =
    match e with
    Times ( e1, Plus( e2, e3 ) ) ->
        Plus (Times ( distribute e1, distribute e2 )
            , Times ( distribute e1, distribute e3 ) )
  | Times ( Plus( e1, e2 ), e3 ) ->
        Plus (Times ( distribute e1, distribute e3 )
            , Times ( distribute e2, distribute e3 ) )
  | Plus ( left, right ) -> Plus ( distribute left, distribute right )
  | Minus ( left, right ) -> Minus ( distribute left, distribute right )
  | Times ( left, right ) -> Times ( distribute left, distribute right )
  | Divide ( left, right ) -> Divide ( distribute left, distribute right )
  | Value v -> Value v
;;

print_expr ( distribute v );;
(* ------------------------------------------------------------ *)



Re: [Haskell-cafe] Haskell Propaganda needed

2012-01-14 Thread Don Stewart
Hey Victor,

Thankfully, there's lots of material and experience reports available,
along with code, for the Haskell+science use case.

In my view Haskell works well as a coordination language, for
organizing computation at a high level, and thanks to its excellent
compiler and runtime, also works well for a parallel, node-level
computation. It is also fairly commonly used as a language for
generating high performance kernels thanks to EDSL support.

Thanks to the rather excellent foreign function interface, it's also
trivial to interact with C (or Fortran or C++) to do number crunching,
or use other non-Haskell libraries.

Another way to view it is the conciseness and ease of development of
Python, with compiled, optimized code approaching C++ or C, but with
excellent parallel tools and libraries in a class of their own.

Some random links:

* The Parallel Haskell Project -
http://www.haskell.org/haskellwiki/Parallel_GHC_Project - an effort to
build parallel Haskell systems in large-scale projects, across a range
of industries. ( Six organizations participating currently,
http://www.haskell.org/haskellwiki/Parallel_GHC_Project#Participating_organisations)

* Parallel and Concurrent Programming in Haskell, a tutorial by Simon
Marlow, http://community.haskell.org/~simonmar/par-tutorial.pdf

* 11 reasons to use Haskell as a mathematician ,
http://blog.sigfpe.com/2006/01/eleven-reasons-to-use-haskell-as.html

* Math libraries on Hackage,
http://hackage.haskell.org/packages/archive/pkg-list.html#cat:math ,
including e.g. statically typed vector, cleverly optimized array
packages, and many others.

* a collection of links about parallel and concurrent Haskell,
http://stackoverflow.com/questions/3063652/whats-the-status-of-multicore-programming-in-haskell/3063668#3063668

* anything on the well-typed blog, http://www.well-typed.com/blog/


It's important to note that many of the high performance or
parallel-oriented libraries in Haskell use laziness or strictness very
carefully. Sometimes strictness is necessary for controlling e.g.
layout (see e.g. the Repa parallel arrays library:
http://www.haskell.org/haskellwiki/Numeric_Haskell%3a_A_Repa_Tutorial)
while sometimes laziness is essential (for minimizing work done in
critical sections inside locks).


Cheers,
   Don

On Sat, Jan 14, 2012 at 5:33 PM, Victor Miller victorsmil...@gmail.com wrote:
 I'm a research mathematician at a research institute with a bunch of other
 mathematicians (a number of whom are also expert programmers).  I recently
 (starting three months ago) have been learning Haskell.  I'm going to give a
 talk to the staff about it.  Most of the audience are pretty experienced
 programmers in C/C++/Python, but with little or no exposure to functional
 languages.  I'm looking for talks from which I can cannibalize good selling
 points.  I was led to Haskell by a somewhat circuitous route: at our place,
 as with most of the world, parallel programs (especially using GPUs) are
 becoming more important. A little googling led me to a few interesting
 projects on automatic mapping computations to GPUs, all of which were based
 on Haskell.   I feel that this will be the way to go.  There's one guy on
 the staff who's a demon programmer: if someone needs something to be adapted
 to GPUs they go to him.  Unfortunately I find reading his code rather
 difficult -- it's rather baroque and opaque.  Thus, I'd like something more
 high level, and something amenable to optimization algorithms.

 In my former life I worked at IBM research on one of the leading edge
 compiler optimization projects, and learned to appreciate the need for clear
 semantics in programs, not just for developing correct programs, but also to
 allow really aggressive optimizations to be performed.  This is another
 reason that I'm interested in functional languages.

 I know that I'll get peppered with questions about efficiency.  We (our
 staff) are interested in *very* large scale computations which must use the
 resources as efficiently as possible.  One of our staff members also opined
 that he felt that a lazy language like Haskell wouldn't be acceptable, since
 it was impossible (or extremely difficult) to predict the storage use of
 such a program.

 So, any suggestions are welcome.

 Victor





Re: [Haskell-cafe] parse error in pattern, and byte code interpreter

2012-01-14 Thread Brandon Allbery
On Sat, Jan 14, 2012 at 18:18, TP paratribulati...@free.fr wrote:

 Times expr1 Plus( expr2 expr3 ) -


OCaml pattern syntax is not the same as Haskell pattern syntax.  The
correct way to write that pattern is

Times expr1 (Plus expr2 expr3)

This is consistent with Haskell not using parentheses for function
parameters, since all calls are curried.
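For completeness, here is a runnable sketch of TP's distribute with
Haskell-style patterns, simplified to the constructors the example actually
uses (the OCaml version above handles the symmetric and recursive cases too):

```haskell
data Expr = Plus Expr Expr
          | Times Expr Expr
          | Variable String
          deriving (Show, Eq)

-- Push multiplication over addition: n * (x + y)  ==>  n * x + n * y.
distribute :: Expr -> Expr
distribute (Times e1 (Plus e2 e3)) =
  Plus (Times (distribute e1) (distribute e2))
       (Times (distribute e1) (distribute e3))
distribute (Plus  e1 e2) = Plus  (distribute e1) (distribute e2)
distribute (Times e1 e2) = Times (distribute e1) (distribute e2)
distribute (Variable v)  = Variable v
```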

-- 
brandon s allbery  allber...@gmail.com
wandering unix systems administrator (available) (412) 475-9364 vm/sms


[Haskell-cafe] Is it possible to create an instance of MonadBaseControl for PropertyM (QuickCheck)/continuations?

2012-01-14 Thread Oliver Charles
Hi

I'm working on a little experiment at the moment: using monadic
QuickCheck to test the integration of my code and the database. I see
some of my functions having properties like "given a database in this
state, selectAll should return all rows", and so on. My initial attempt
has worked nicely, and now I'm trying to test some more complicated
properties, but I'm hitting a problem with overlapping primary keys, and
this is because I'm not correctly cleaning up after each check.

The simplest solution to this is to bracket property itself, and for
that I turned to Control.Monad.Trans.Control in lifted-base, but I am
struggling to actually write an instance of
MonadBaseControl IO (PropertyM IO). It seems that PropertyM is a
continuation monad transformer:

newtype PropertyM m a =
  MkPropertyM { unPropertyM :: (a -> Gen (m Property)) -> Gen (m Property) }

Given this, is it possible to even write an instance of
MonadBaseControl? From the lifted-base documentation, it explicitly
calls out ContT as *not* having instances, so I wonder if it can't be
done. Sadly, I also lack intuition as to what MonadBaseControl really
means - I think it means 'capture the state of this monad, so later we
can go back to exactly the same state (or sequence of actions?)', but
this is very flakey.

So... is this possible, and if so how can I move forward?

Thanks for any help or advice!
- Ollie



[Haskell-cafe] generating parens for pretty-printing code in haskell-src-exts

2012-01-14 Thread Conal Elliott
I'm using haskell-src-exts together with SYB for a code-rewriting project,
and I'm having difficulty with parenthesization. I naïvely expected that
parentheses would be absent from the abstract syntax, being removed during
parsing and inserted during pretty-printing. It's easy for me to remove
them myself, but harder to add them (minimally) after transformation.
Rather than re-inventing the wheel, I thought I'd ask for help.

Has anyone written automatic minimal parens insertion for haskell-src-exts?

-- Conal


Re: [Haskell-cafe] generating parens for pretty-printing code in haskell-src-exts

2012-01-14 Thread Stephen Tetley
Hi Conal

I don't know if any haskell-src-exts code exists. Norman Ramsey has
published an algorithm for it, plus ML code:

http://www.cs.tufts.edu/~nr/pubs/unparse-abstract.html

I've transcribed the code to Haskell a couple of times for small
expression languages. As far as I remember you need to know what
constructors you are working with so it can't be put in a generic
pretty print library, but the constructor specific code is
stereotypical so it should be easy (if boring) to write for Haskell
src-exts.

Best wishes

Stephen
