Re: [Haskell-cafe] Announce: EnumMap-0.0.1

2009-08-13 Thread Ketil Malde
David Menendez d...@zednenem.com writes:

 That depends on what outside the Enum range means. You'll get an
 exception if you somehow get an Int key in the map which doesn't
 correspond to any value in the enum, but you don't get an exception if
 you try to pass in, say, a large Integer.

 Prelude> fromEnum (2^32)
 0

Yes, but:

  Prelude Data.Int> fromEnum (2^32 :: Int64)
  *** Exception: Enum.fromEnum{Int64}: value (4294967296) is outside of Int's 
bounds (-2147483648,2147483647)

so apparently, different Enum instances deal with this differently.

From GHC.Num:

instance Enum Integer where
[...]
fromEnum n   = I# (toInt# n)

From GHC.Int:

instance Enum Int64 where
[...]
fromEnum x@(I64# x#)
    | x >= fromIntegral (minBound::Int) && x <= fromIntegral (maxBound::Int)
                    = I# (int64ToInt# x#)
    | otherwise     = fromEnumError "Int64" x

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants


[Haskell-cafe] DDC compiler and effects

2009-08-13 Thread oleg

Combining non-strict evaluation and even laziness (non-strictness plus
sharing) with effects is of course possible. That is the topic of the
ICFP09 paper of Sebastian Fischer et al., which describes the
combination of laziness with non-determinism. The ideas of the paper
have been implemented in a Hackage package and in the embedded
probabilistic DSL Hansei. In the presence of effects, call-by-name is
no longer observationally equivalent to call-by-need: one has to be
explicit what is shared. Curry's call-time-choice seems a good choice:
variables always denote values rather than computations and can be
shared at will by the run-time system. The language is still
non-strict, and computations whose results are not demanded are not
evaluated and none of their effects are performed.

Since any computable effect can be modeled by delimited control,
call-by-name shift/reset calculi provide general treatment of effects
and non-strictness. Call-by-name delimited control calculi have
been developed by Herbelin (POPL08). A different call-by-name
shift/reset calculus (containing call-by-value calculus as a proper
subset) is described in
http://okmij.org/ftp/Computation/Continuations.html#cbn-shift

The latter, cbn-xi, calculus includes effect sub-typing, which is
common in effect calculi. A call-by-name function whose argument type
is effectful can always be applied to a pure term. A pure term can be
regarded as potentially having any effect.  On the other hand, a
function whose argument type is pure can only be applied to a pure
term. The cbn-xi calculus also supports strict functions. Their
argument type is always a pure type and they can be applied to terms
regardless of their effect, because the effect will occur before the
application. Pure functions thus provide a measure of
effect-polymorphism. The cbn-xi and the other CBN calculi with
delimited control are deterministic.

The best application for the cbn-xi calculus I could find was in
linguistics. The calculus was used to represent various effects in
natural language: quantification, binding, raised and in-situ
wh-questions. Importantly, several effects -- several question words,
anaphora with quantification, superiority -- could be represented.


One may view ML and Haskell as occupying two opposite extremes. ML
assumes every computation to be effectful and every function to have
side effects. Therefore, an ML compiler cannot do optimizations like
reordering (or even apply commutative laws where they exist), unless it can
examine the source code and prove that the computations to reorder are
effect-free. That obviously all but precludes separate compilation;
the only aggressive optimizing ML compiler, MLton, is a whole-program
compiler.

Haskell, on the other hand, assumes every expression is pure. Lots of
algebraic properties become available and can be exploited, by
compilers and by people. Alas, that precludes any effects.  Hence monads
were introduced, bringing back control dependencies disguised as data
dependencies. The monadic language forms a restrictive subset of Haskell
(for example, we can't use monadic expressions in guards, `if' and
`case' statements). In the monadic sublanguage, Haskell swings to the other
extreme and assumes, like ML, that everything is impure. Thus in an
expression,
    do
      x <- e1
      y <- e2
      ...
the two bindings cannot be commuted, even if e1 and e2 are both
'return'. If the compiler sees the source code of e1 and e2 and is
sufficiently smart to apply the monadic laws, it can determine that
the two bindings may be reordered. However, if e1 and e2 are defined
in a separate module, I don't think that any existing compiler would
even think about reordering.
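
For a concrete (purely illustrative) sketch of the kind of reordering in
question:

    -- These two are equal by the monad laws, since the bound actions are
    -- just 'return'; yet a compiler that cannot see the definitions of
    -- the actions has to preserve the written order.
    prog1, prog2 :: Monad m => m (Int, Int)
    prog1 = do { x <- return 1; y <- return 2; return (x, y) }
    prog2 = do { y <- return 2; x <- return 1; return (x, y) }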

Hopefully a system like DDC will find the middle ground.



[Haskell-cafe] Cleaner networking API - network-fancy

2009-08-13 Thread Taru Karttunen
Hello

network-fancy offers a cleaner API to networking facilities in
Haskell. It supports high-level operations on tcp, udp and unix
sockets. 

I would like some feedback on the API
http://hackage.haskell.org/packages/archive/network-fancy/0.1.4/doc/html/Network-Fancy.html

In particular:
* Does the type of the server function in dgramServer make sense?
  or would (packet -> Address -> (packet -> IO ()) -> IO ()) be
  better?
* Does the StringLike class make sense?
* Any other suggestions?

- Taru Karttunen


Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-13 Thread Sebastian Sylvan
On Thu, Aug 13, 2009 at 4:56 AM, John A. De Goes j...@n-brain.net wrote:


 The next step is to distinguish between reading file A and reading file B,

between reading file A and writing file A, between reading one part of file
 A and writing another part of file A, etc. When the effect system can carry
 that kind of information, and not just for files, but network, memory, etc.,
 then you'll be able to do some extremely powerful parallelization &
  optimization.


 What if you have another program, written in C or something, that
monitors a file for changes, and if so changes the contents of another file?
Surely to catch that you must mark *all* file system access as
interfering? Even worse, another program could monitor the state of a
file and conditionally disable the network driver; now file access
interferes with network access.


-- 
Sebastian Sylvan


[Haskell-cafe] Re: DDC compiler and effects; better than Haskell? (was Re: unsafeDestructiveAssign?)

2009-08-13 Thread Heinrich Apfelmus
John A. De Goes wrote:
 So what, because effect systems might not eliminate *all* boilerplate,
 you'd rather use boilerplate 100% of the time? :-)

The thing is that you still need  mapM  and friends for all those
effects (like non-determinism) that are not baked into the language.


Regards,
apfelmus

--
http://apfelmus.nfshost.com



Re: [Haskell-cafe] containers and maps

2009-08-13 Thread John Lato
On 8/13/09, Jake McArthur jake.mcart...@gmail.com wrote:
 Jake McArthur wrote:

  The monoids package offers something similar to this:
 
     mapReduce :: (Generator c, Reducer e m) => (Elem c -> e) -> c -> m
 
  If we take (Elem c) to be (item), (e) to be (item'), (c) to be (full), and
 (m) to be (full'), it's basically the same thing, and offers the same
 advantages as the ones you listed, as far as I can tell.
 

  Your example about uvector inspired me to try writing out the necessary
 instances for uvector:

     instance UA a => Monoid (UArr a) where
         mempty  = emptyU
         mappend = appendU

     instance UA a => Reducer a (UArr a) where
         unit = singletonU
         snoc = snocU
         cons = consU

     instance UA a => Generator (UArr a) where
         type Elem (UArr a) = a
         mapTo f = foldlU (\a -> snoc a . f)


This looks to be essentially the same as the 'map' function in
ListLike, and suffers from the same problem.  It won't have the
performance characteristics of the native map functions.  Using e.g.
ByteStrings, you're recreating a ByteString by snoc'ing elements.

This might work with UVector (I intend to try it this evening); I
don't know how well the fusion framework will hold up in class
dictionaries.
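
A rough sketch of the concern (illustrative only): building a ByteString by
repeated snoc copies the buffer at every step, which is quadratic, while the
native map stays linear:

    import Data.Word (Word8)
    import qualified Data.ByteString as B

    viaSnoc, viaMap :: (Word8 -> Word8) -> B.ByteString -> B.ByteString
    viaSnoc f = B.foldl' (\acc w -> B.snoc acc (f w)) B.empty  -- O(n^2)
    viaMap    = B.map                                          -- O(n)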

Still, the monoids package is very powerful (and I'd completely
forgotten it).  Perhaps there's another approach that would work?

John


[Haskell-cafe] Re: DDC compiler and effects; better than Haskell?

2009-08-13 Thread Heinrich Apfelmus
Russell O'Connor wrote:
 Peter Verswyvelen wrote:
 
 I kind of agree with the DDC authors here; in Haskell as soon as a
 function has a side effect, and you want to pass that function to a
 pure higher order function, you're stuck, you need to pick the monadic
 version of the higher order function, if it exists. So Haskell doesn't
 really solve the modularity problem, you need two versions of each
 higher order function really,
 
 Actually you need five versions: The pure version, the pre-order
 traversal, the post-order traversal, the in-order traversal, and the
 reverse in-order traversal.  And that is just looking at syntax.  If you
 care about your semantics you could potentially have more (or less).

Exactly! There is no unique choice for the order of effects when lifting
a pure function to an effectful one.

For instance, here are two different versions of an effectful map:

   mapM f [] = return []
   mapM f (x:xs) = do
       y  <- f x
       ys <- mapM f xs
       return (y:ys)

   mapM2 f [] = return []
   mapM2 f (x:xs) = do
       ys <- mapM2 f xs
       y  <- f x
       return (y:ys)

Which one will the DDC compiler choose, given

   map f [] = []
   map f (x:xs) = f x : map f xs

? Whenever I write a pure higher order function, I'd also have to
document the order of effects.


Regards,
apfelmus

--
http://apfelmus.nfshost.com



[Haskell-cafe] Re: unsafeDestructiveAssign?

2009-08-13 Thread John Lato
  From: John Meacham j...@repetae.net
  Subject: Re: [Haskell-cafe] unsafeDestructiveAssign?
  On Wed, Aug 12, 2009 at 06:48:33PM +0100, John Lato wrote:
   On Wed, Aug 12, 2009 at 2:39 PM, Derek Elkinsderek.a.elk...@gmail.com 
 wrote:
   
(To Alberto as well.)
   
Unsurprisingly, Lennart stated the real issue, but I'll re-emphasize
it.  As much trouble as such a profound violation of purity would
cause, it's not the biggest problem.  If that were all, you could
easily write a C/assembly function that would do what was desired.
The real problem is that there isn't necessarily any data chunk at
all, i.e. there may not be anything to mutate.
   
  
   Lennart is right of course, but wouldn't his example just be a
   specific case of my argument?  That is, the compiler decided to
   evaluate the data at compile time by replacing it with the primitive
   value and inlining it?  It seems to me that in the absence of putting
   some scope or sequencing upon the mutating function, there's no way
   for such an unsafeMutate* to have any defined meaning in Haskell.  And
   once you have provided either scope or evaluation order, it would be
   equivalent to just using a monad (although not necessarily IO).


  The inherent problem is that 'heap locations' are not first class values in
  haskell. There is no way to refer to 'the location holding a variable',
  not so much because they don't exist, but because it is just
  inexpressible in haskell. for instance,

Thanks for this clear explanation.  The concept of an
unsafeDestructiveAssign function really caught my attention because it
seemed inexpressible.  This makes perfect sense, including an
explanation for mutable data in IO (because IORef, Ptr, and the like
make a data location a first class value).

I like to have a solid foundation for my nonsense.

Cheers,
John


Re: [Haskell-cafe] Re: DDC compiler and effects; better than Haskell?

2009-08-13 Thread Peter Verswyvelen
Well, in DDC I believe the order is left to right.

But you guys are right, many orders exist.

On the other hand, a language might offer primitives to convert
pure functions to effectful ones, in which you indicate the order you
want.

e.g. preOrder map

No?

(anyway Oleg's reply seems to give a definite answer to this thread no? :-)

On Thu, Aug 13, 2009 at 11:06 AM, Heinrich
Apfelmusapfel...@quantentunnel.de wrote:
 Russell O'Connor wrote:
 Peter Verswyvelen wrote:

 I kind of agree with the DDC authors here; in Haskell as soon as a
 function has a side effect, and you want to pass that function to a
 pure higher order function, you're stuck, you need to pick the monadic
 version of the higher order function, if it exists. So Haskell doesn't
 really solve the modularity problem, you need two versions of each
 higher order function really,

 Actually you need five versions: The pure version, the pre-order
 traversal, the post-order traversal, the in-order traversal, and the
 reverse in-order traversal.  And that is just looking at syntax.  If you
 care about your semantics you could potentially have more (or less).

 Exactly! There is no unique choice for the order of effects when lifting
 a pure function to an effectful one.

 For instance, here two different versions of an effectful  map :

   mapM f []     = return []
   mapM f (x:xs) = do
        y  <- f x
        ys <- mapM f xs
       return (y:ys)

   mapM2 f []     = return []
   mapM2 f (x:xs) = do
        ys <- mapM2 f xs
        y  <- f x
       return (y:ys)

 Which one will the DDC compiler choose, given

   map f []     = []
   map f (x:xs) = f x : map f xs

 ? Whenever I write a pure higher order function, I'd also have to
 document the order of effects.


 Regards,
 apfelmus

 --
 http://apfelmus.nfshost.com



Re: [Haskell-cafe] Re: DDC compiler and effects; better than Haskell?

2009-08-13 Thread Ben Lippmeier

Heinrich Apfelmus wrote:

Actually you need five versions: The pure version, the pre-order
traversal, the post-order traversal, the in-order traversal, and the
reverse in-order traversal.  And that is just looking at syntax.  If you
care about your semantics you could potentially have more (or less).



Exactly! There is no unique choice for the order of effects when lifting
a pure function to an effectful one.

For instance, here two different versions of an effectful  map :

   mapM f [] = return []
   mapM f (x:xs) = do
       y  <- f x
       ys <- mapM f xs
       return (y:ys)

   mapM2 f [] = return []
   mapM2 f (x:xs) = do
       ys <- mapM2 f xs
       y  <- f x
       return (y:ys)

Which one will the DDC compiler choose, given

   map f [] = []
   map f (x:xs) = f x : map f xs
  

Disciple uses strict, left-to-right evaluation order by default. For
the above map function, if f has any effects they will be executed
in the same order as the list elements.


? Whenever I write a pure higher order function, I'd also have to
document the order of effects.
  


If you write a straight-up higher-order function like map above,
then it's neither pure nor impure. Rather, it's polymorphic in the
effect of its argument function. When effect information is
added to the type of map it becomes:


map :: forall a b %r1 %r2 !e1
    .  (a -(!e1)> b) -> List %r1 a -(!e2)> List %r2 b
    :- !e2 = !{ !Read %r1; !e1 }


Which says the effect of evaluating map is to read the list and
do whatever the argument function does. If the argument function
is pure, and the input list is constant, then the application
of map is pure, otherwise not.

If you want to define an always-pure version of map, which
only accepts pure argument functions then you can give it the
signature:

pureMap :: (a -(!e1)> b) -> List %r1 a -> List %r2 b
    :- Pure !e1, Const %r1

.. and use the same definition as before.

Note that you don't have to specify the complete type in the
source language, only the bits you care about - the rest is
inferred.

Now if you try to pass pureMap an impure function, you get
an effect typing error.

Adding purity constraints allows you to write H.O functions
without committing to an evaluation order, so you can change
it later if desired.


Ben.




Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-13 Thread Conor McBride

Hi Dan

On 12 Aug 2009, at 22:28, Dan Doel wrote:


On Wednesday 12 August 2009 10:12:14 am John A. De Goes wrote:

I think the point is that a functional language with a built-in
effect system that captures the nature of effects is pretty damn
cool and eliminates a lot of boilerplate.


It's definitely an interesting direction (possibly even the right one
in the long run), but it's not without its faults currently (unless
things have changed since I looked at it).

From what I've seen, I think we should applaud Ben for
kicking the door open here.

Is Haskell over-committed to monads? Does Haskell make
a sufficient distinction between notions of value and
notions of computation in its type system?

For instance: what effects does Disciple support? Mutation and IO?
What if I want non-determinism, or continuations, etc.? How do I as a
user add those effects to the effect system, and specify how they
should interact with the other effects? As far as I know, there aren't
yet any provisions for this, so presumably you'll end up with an effect
system for effects supported by the compiler, and monads for effects
you're writing yourself.

By contrast, monad transformers (for one) let you do the above defining
of new effects, and specifying how they interact (although they're
certainly neither perfect, nor completely satisfying theoretically).

Someone will probably eventually create (or already has, and I don't
know about it) an extensible effect system that would put this
objection to rest.

Until then, you're dealing in trade offs.


It's still very much on the drawing board, but I once
flew a kite called Frank which tried to do something
of the sort (http://cs.ioc.ee/efftt/mcbride-slides.pdf).

Frank distinguishes value types from computation
types very much in the manner of Paul Blain Levy's
call-by-push-value. You make a computation type from
a value type v by attaching a capability to it (a
possibly empty set of interfaces which must be
enabled) [i_1,..i_n]v. You make a value type from a
computation type c by suspending it {c}. Expressions
are typed with value components matched up in the usual
way and capabilities checked for inclusion in the ambient
capability. That is, you don't need idiom-brackets
because you're always in them: it's just a question
of which idiom, as tracked by type.

There's a construct to extend the ambient idiom by
providing a listener for an interface (subtly
disguised, a homomorphism from the free monad on the
interface to the outer notion of computation).
Listeners can transform the value type of the
computations they interpret: a listener might offer
the throw capability to a computation of type t,
and deliver a pure computation of type Maybe t. But
[Throw]t and []Maybe t are distinguished, unlike
in Haskell. Moreover {[]t} and t are distinct: the
former is lazy, the latter strict, but there is no
reason why we should ever evaluate a pure thunk
more than once, even if it is forced repeatedly.

I agree with Oleg's remarks, elsewhere in this thread:
there is a middle ground to be found between ML and
Haskell. We should search with open minds.

All the best

Conor



Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-13 Thread Jason Dusek
2009/08/12 John A. De Goes j...@n-brain.net:
 The next step is to distinguish between reading file A and
 reading file B, between reading file A and writing file A,
 between reading one part of file A and writing another part of
 file A, etc. When the effect system can carry that kind of
 information, and not just for files, but network, memory,
 etc., then you'll be able to do some extremely powerful
  parallelization & optimization.

  I am challenged to imagine optimizations that would be safe in
  the case of File I/O.

--
Jason Dusek


[Haskell-cafe] Parallel Symbolic Computing Research Post

2009-08-13 Thread Trinder, Philip W
The next generation of HPC platform will provide teraflop performance
from 10^5 or more cores. A major challenge is to develop the programming
models and software infrastructures to effectively exploit these
architectures. In the HPC-GAP project (High Performance Computational
Algebra and Discrete Mathematics EPSRC EP/G055181) we aim to address
this challenge for the GAP Group Algebra package.

You will join a team of four researchers in an ambitious programme to
adapt the GAP computational algebra system to take advantage of modern
high performance computers. You will collaborate with other HPC-GAP
members at Aberdeen and St Andrews and Edinburgh Universities. You will
take part in and publish research using the software you develop.

You will have excellent parallel system engineering expertise, a degree
in computer science, an appropriate PhD or equivalent, and good teamwork
skills. Desirable criteria include functional language experience e.g.
Haskell, a track record of academic publications, experience preparing
research grant applications, and experience with a range of software
technologies.

You will start on 1 September 2009 and continue until the end of the
grant in August 2013.

Full details at
www.jobs.ac.uk/jobs/SP072/Research_Associate_-_Parallel_Symbolic_Computi
ng/


Phil Trinder
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS

E: p.w.trin...@hw.ac.uk
T: +44 (0)131 451 3435
F: +44 (0)131 451 3327
I: www.macs.hw.ac.uk/~trinder




-- 
Heriot-Watt University is a Scottish charity
registered under charity number SC000278.



Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-13 Thread Alberto G. Corona
Another issue in DDC, in the course of writing a procedure, is that the
programmer either has too much information (the list of effects of all the
called procedures, if they are explicit) or too little, if they are
generated and managed by the compiler.
How does he know for sure whether a variable to be used in the next
statement is pure, or whether it would be updated by the previous function
call? If the list of effects of a procedure is hidden, or worse, contains a
lot of information, isn't this a problem?

In contrast, the division of the world into pure/impure operations is
relaxing. OK, after the @ operator I'm sure that everything is pure, but
things are not clear outside. At least in Haskell, through monads, we have a
clear signature for the effects we are dealing with, if IO can be
considered an effect.

Maybe, in Haskell, the coarse IO monad can be divided into smaller monads as
well, in the same (but reverse) way that DDC can handle the whole of IO as a
single effect (I guess)?
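
A speculative sketch of what I mean (purely illustrative, nothing standard;
all the names are invented): each newtype wraps IO but only exports
operations for one kind of effect:

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    import Control.Applicative (Applicative)

    -- Hypothetical restricted monads carved out of IO; only file
    -- operations are lifted into FileIO, so a FileIO action cannot
    -- touch the network, and vice versa.
    newtype FileIO a = FileIO (IO a) deriving (Functor, Applicative, Monad)
    newtype NetIO  a = NetIO  (IO a) deriving (Functor, Applicative, Monad)

    readFileF :: FilePath -> FileIO String
    readFileF = FileIO . readFile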

2009/8/13 Jason Dusek jason.du...@gmail.com

 2009/08/12 John A. De Goes j...@n-brain.net:
  The next step is to distinguish between reading file A and
  reading file B, between reading file A and writing file A,
  between reading one part of file A and writing another part of
  file A, etc. When the effect system can carry that kind of
  information, and not just for files, but network, memory,
  etc., then you'll be able to do some extremely powerful
   parallelization & optimization.

   I am challenged to imagine optimizations that would be safe in
  the case of File I/O.

 --
 Jason Dusek


Re: [Haskell-cafe] Cleaner networking API - network-fancy

2009-08-13 Thread Magnus Therning
On Thu, Aug 13, 2009 at 9:14 AM, Taru Karttunentar...@taruti.net wrote:
 Hello

 network-fancy offers a cleaner API to networking facilities in
 Haskell. It supports high-level operations on tcp, udp and unix
 sockets.

 I would like some feedback on the API
 http://hackage.haskell.org/packages/archive/network-fancy/0.1.4/doc/html/Network-Fancy.html

 In particular:
 * Does the type of the server function in dgramServer make sense?
  or would (packet -> Address -> (packet -> IO ()) -> IO ()) be
  better?
 * Does the StringLike class make sense?
 * Any other suggestions?

There is already a getHostName in Network.BSD, any reason for not using it?

/M

-- 
Magnus Therning(OpenPGP: 0xAB4DFBA4)
magnus@therning.org  Jabber: magnus@therning.org
http://therning.org/magnus identi.ca|twitter: magthe


Re: [Haskell-cafe] containers and maps

2009-08-13 Thread Jake McArthur

John Lato wrote:

This looks to be essentially the same as the 'map' function in
ListLike, and suffers from the same problem.  It won't have the
performance characteristics of the native map functions.  Using e.g.
ByteStrings, you're recreating a ByteString by snoc'ing elements.


Oh, I see now what you are after. You're right. This wouldn't play nice 
when creating ByteStrings (which is probably why there is no instance 
for Reducer Char ByteString).



This might work with UVector (I intend to try it this evening); I
don't know how well the fusion framework will hold up in class
dictionaries.


Do report back, as I am curious as well.


Still, the monoids package is very powerful (and I'd completely
forgotten it).  Perhaps there's another approach that would work?


Yay, something to mull over! :)

- Jake


Re: [Haskell-cafe] unsafeDestructiveAssign?

2009-08-13 Thread Roberto Zunino

Job Vranish wrote:

Does anybody know if there is some unsafe IO function that would let me do
destructive assignment?
Something like:

a = 5
main = do
  veryUnsafeAndYouShouldNeverEveryCallThisFunction_DestructiveAssign a 8
  print a

8


Untested, just guessing:

  import Data.IORef
  import System.IO.Unsafe (unsafePerformIO)

  {-# NOINLINE aRef #-}
  aRef = unsafePerformIO (newIORef (0 :: Int))

  {-# NOINLINE a #-}
  a = unsafePerformIO (readIORef aRef)

  main = do
     print a          -- 0 if you are lucky
     writeIORef aRef 5
     print a          -- 5 if you are lucky

However, I would not in the least be surprised if your program stops 
working whenever your cat purrs.


And yes, I find the above code quite disgusting.

Zun.


Re: [Haskell-cafe] Cleaner networking API - network-fancy

2009-08-13 Thread John A. De Goes


Thank goodness for a cleaner networking API. I almost chose Haskell's
socket API as an example of what _not_ to do in my series on Good API
Design (http://jdegoes.squarespace.com/journal/2009/5/11/good-api-design-part-3.html).

Ended up going with Java though. :-)

Regards,

John A. De Goes
N-Brain, Inc.
The Evolution of Collaboration

http://www.n-brain.net|877-376-2724 x 101

On Aug 13, 2009, at 2:14 AM, Taru Karttunen wrote:


Hello

network-fancy offers a cleaner API to networking facilities in
Haskell. It supports high-level operations on tcp, udp and unix
sockets.

I would like some feedback on the API
http://hackage.haskell.org/packages/archive/network-fancy/0.1.4/doc/html/Network-Fancy.html

In particular:
* Does the type of the server function in dgramServer make sense?
 or would (packet -> Address -> (packet -> IO ()) -> IO ()) be
 better?
* Does the StringLike class make sense?
* Any other suggestions?

- Taru Karttunen


Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-13 Thread John A. De Goes
What if you have another program, written in C or something, that
monitors a file for changes, and if so changes the contents of
another file? Surely to catch that you must mark *all* file system
access as interfering? Even worse, another program could monitor
the state of a file and conditionally disable the network driver,
now file access interferes with network access.



A compiler or runtime system can't know about these kinds of things --  
unless perhaps you push the effect system into the operating system  
(interesting idea). The best you can do is ensure the program itself  
is correct in the absence of interference from other programs, but  
there's no way to obtain a guarantee in the presence of interference.  
Either with an effect system, or without (think of all the sequential  
imperative code that gets broken when other programs concurrently  
tamper with the file system or networking, etc.).


Regards,

John A. De Goes
N-Brain, Inc.
The Evolution of Collaboration

http://www.n-brain.net|877-376-2724 x 101

On Aug 13, 2009, at 2:42 AM, Sebastian Sylvan wrote:




On Thu, Aug 13, 2009 at 4:56 AM, John A. De Goes j...@n-brain.net  
wrote:


The next step is to distinguish between reading file A and reading  
file B,
between reading file A and writing file A, between reading one part  
of file A and writing another part of file A, etc. When the effect  
system can carry that kind of information, and not just for files,  
but network, memory, etc., then you'll be able to do some extremely  
powerful parallelization & optimization.


What if you have another program, written in C or something, that
monitors a file for changes, and if so changes the contents of
another file? Surely to catch that you must mark *all* file system
access as interfering? Even worse, another program could monitor
the state of a file and conditionally disable the network driver,
now file access interferes with network access.



--
Sebastian Sylvan




Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-13 Thread John A. De Goes


Hmmm, bad example. Assume memory instead. That said, reordering/ 
parallelization of *certain combinations of* writes/reads to  
independent files under whole program analysis is no less safe than  
sequential writes/reads. It just feels less safe, but the one thing  
that will screw both up is interference from outside programs.


Regards,

John A. De Goes
N-Brain, Inc.
The Evolution of Collaboration

http://www.n-brain.net|877-376-2724 x 101

On Aug 13, 2009, at 3:45 AM, Jason Dusek wrote:


2009/08/12 John A. De Goes j...@n-brain.net:

The next step is to distinguish between reading file A and
reading file B, between reading file A and writing file A,
between reading one part of file A and writing another part of
file A, etc. When the effect system can carry that kind of
information, and not just for files, but network, memory,
etc., then you'll be able to do some extremely powerful
parallelization & optimization.


 I am challenged to imagine optimizations that would be safe in
 the case of File I/O.

--
Jason Dusek




Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-13 Thread John A. De Goes

On Aug 13, 2009, at 4:09 AM, Alberto G. Corona wrote:

Maybe, in Haskell, the coarse IO monad can be divided in smaller  
monads as well



I don't even want to imagine how that would obfuscate otherwise  
straightforward looking monadic code.


The root problem is that monads don't capture enough information on  
the nature of effects, which limits composability. What's needed is  
something richer, which gives the compiler enough information to do  
all those things that make life easy for the developer, whilst  
maximizing the performance of the application. DDC is an interesting  
experiment in that direction.


Regards,

John A. De Goes
N-Brain, Inc.
The Evolution of Collaboration

http://www.n-brain.net|877-376-2724 x 101




[Haskell-cafe] IFL 2009: Final Call for Papers and Participation

2009-08-13 Thread IFL 2009
Call for Papers and Participation

IFL 2009
Seton Hall University
South Orange, NJ, USA
http://tltc.shu.edu/blogs/projects/IFL2009/
Register at: http://tltc.shu.edu/blogs/projects/IFL2009/registration.html

* NEW * Registration and talk submission deadline fast approaching:
August 23, 2009

The 21st International Symposium on Implementation and Application of
Functional Languages, IFL 2009, will be held for the first time in the USA.
The hosting institution is Seton Hall University in South Orange, NJ, USA,
and the symposium dates are September 23-25, 2009. It is our goal to make
IFL a regular event held in the USA and in Europe. The goal of the IFL
symposia is to bring together researchers actively engaged in the
implementation and application of functional and function-based programming
languages. IFL 2009 will be a venue for researchers to present and discuss
new ideas and concepts, work in progress, and publication-ripe results
related to the implementation and application of functional languages and
function-based programming.

Following the IFL tradition, IFL 2009 will use a post-symposium review
process to produce a formal proceedings which will be published by Springer
in the Lecture Notes in Computer Science series. All participants in IFL
2009 are invited to submit either a draft paper or an extended abstract
describing work to be presented at the symposium. These submissions will be
screened by the program committee chair to make sure they are within the
scope of IFL and will appear in the draft proceedings distributed at the
symposium. Submissions appearing in the draft proceedings are not
peer-reviewed publications. After the symposium, authors will be given the
opportunity to incorporate the feedback from discussions at the symposium
and will be invited to submit a revised full article for the formal review
process. These revised submissions will be reviewed by the program
committee using prevailing academic standards to select the best articles
that will appear in the formal proceedings.

Invited Speaker: Benjamin C. Pierce, University of Pennsylvania
Talk Title: How To Build Your Own Bidirectional Programming Language

TOPICS

IFL welcomes submissions describing practical and theoretical work as well
as submissions describing applications and tools. If you are not sure if
your work is appropriate for IFL 2009, please contact the PC chair at
ifl2...@shu.edu. Topics of interest include, but are not limited to:

* language concepts
* type checking
* contracts
* compilation techniques
* staged compilation
* runtime function specialization
* runtime code generation
* partial evaluation
* (abstract) interpretation
* generic programming techniques
* automatic program generation
* array processing
* concurrent/parallel programming
* concurrent/parallel program execution
* functional programming and embedded systems
* functional programming and web applications
* functional programming and security
* novel memory management techniques
* runtime profiling and performance measurements
* debugging and tracing
* virtual/abstract machine architectures
* validation and verification of functional programs
* tools and programming techniques
* FP in Education

PAPER SUBMISSIONS

Prospective authors are encouraged to submit papers or extended abstracts
to be published in the draft proceedings and to present them at the
symposium. All contributions must be written in English, conform to the
Springer-Verlag LNCS series format and not exceed 16 pages. The draft
proceedings will appear as a technical report of the Department of
Mathematics and Computer Science of Seton Hall University.

IMPORTANT DATES

Registration deadline: August 23, 2009
Presentation submission deadline: August 23, 2009
IFL 2009 Symposium: September 23-25, 2009
Submission for review process deadline: November 1, 2009
Notification Accept/Reject: December 22, 2009
Camera ready version: February 1, 2010

PROGRAM COMMITTEE

Peter Achten, University of Nijmegen, The Netherlands
Jost Berthold, Philipps-Universität Marburg, Germany
Andrew Butterfield, University of Dublin, Ireland
Robby Findler, Northwestern University, USA
Kathleen Fisher, AT&T Research, USA
Cormac Flanagan, University of California at Santa Cruz, USA
Matthew Flatt, University of Utah, USA
Matthew Fluet, Toyota Technological Institute at Chicago, USA
Daniel Friedman, Indiana University, USA
Andy Gill, University of Kansas, USA
Clemens Grelck, University of Amsterdam/Hertfordshire, The Netherlands/UK
Jurriaan Hage, Utrecht University, The Netherlands
Ralf Hinze, Oxford University, UK
Paul Hudak, Yale University, USA
John Hughes, Chalmers University of Technology, Sweden
Patricia Johann, University of Strathclyde, UK
Yukiyoshi Kameyama, University of Tsukuba, Japan
Marco T. Morazán (Chair), Seton Hall University, USA
Rex Page, University of Oklahoma, USA
Fernando Rubio, Universidad Complutense de Madrid, Spain
Sven-Bodo Scholz, University of Hertfordshire, UK
Manuel Serrano, INRIA Sophia-Antipolis, France
Chung-chieh Shan, Rutgers University, USA
David Walker, Princeton University, USA
Viktória Zsók

RE: [Haskell-cafe] Re: DDC compiler and effects; better than Haskell?

2009-08-13 Thread Sittampalam, Ganesh
What would preOrder foldr/foldl mean? What about preOrder (reverse . map) and 
preOrder (map . reverse) ?

Another option would be for map to take a strategy as a parameter, sort of 
like Control.Parallel.Strategies.
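
A rough sketch of that second option (purely illustrative; it has nothing to
do with the actual Strategies API, and all names are made up):

    import Control.Monad (liftM)

    -- An effect-ordering strategy says in which order to run the effects of
    -- a monadic map; the result list keeps its original order either way.
    type EffectOrder m a b = (a -> m b) -> [a] -> m [b]

    forwards, backwards :: Monad m => EffectOrder m a b
    forwards  f = mapM f
    backwards f = liftM reverse . mapM f . reverse

    mapWith :: Monad m => EffectOrder m a b -> (a -> m b) -> [a] -> m [b]
    mapWith order = order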

Peter Verswyvelen wrote:
 Well, in DDC I believe the order is left to right.
 
 But you guys are right, many orders exist.
 
 On the other hand, a language might offer primitives to convert
 pure-to-effectfull functions no, in which you indicate the order you
 want.  
 
 e.g. preOrder map
 
 No?
 
 (anyway Oleg's reply seems to give a definite answer to this thread
 no? :-) 
 
 On Thu, Aug 13, 2009 at 11:06 AM, Heinrich
 Apfelmusapfel...@quantentunnel.de wrote: 
 Russell O'Connor wrote:
 Peter Verswyvelen wrote:
 
 I kind of agree with the DDC authors here; in Haskell as soon as a
 function has a side effect, and you want to pass that function to a
 pure higher order function, you're stuck, you need to pick the
 monadic version of the higher order function, if it exists. So
 Haskell doesn't really solve the modularity problem, you need two
 versions of each higher order function really,
 
 Actually you need five versions: The pure version, the pre-order
 traversal, the post-order traversal, the in-order traversal, and the
 reverse in-order traversal.  And that is just looking at syntax.  If
 you care about your semantics you could potentially have more (or
 less). 
 
 Exactly! There is no unique choice for the order of effects when
 lifting a pure function to an effectful one.
 
 For instance, here two different versions of an effectful  map :
 
   mapM f []     = return []
   mapM f (x:xs) = do
        y  <- f x
        ys <- mapM f xs
       return (y:ys)
 
   mapM2 f []     = return []
   mapM2 f (x:xs) = do
        ys <- mapM2 f xs
        y  <- f x
       return (y:ys)
 
 Which one will the DDC compiler choose, given
 
   map f []     = []
   map f (x:xs) = f x : map f xs
 
 ? Whenever I write a pure higher order function, I'd also have to
 document the order of effects.
 
 
 Regards,
 apfelmus
 
 --
 http://apfelmus.nfshost.com
 




Re: [Haskell-cafe] Re: Thinking about what's missing in our library coverage

2009-08-13 Thread Jason Dagit
On Mon, Aug 3, 2009 at 10:53 PM, Don Stewart d...@galois.com wrote:

 alexander.dunlap:
o pandoc — markdown, reStructuredText, HTML, LaTeX, ConTeXt,
 Docbook, OpenDocument, ODT, RTF, MediaWiki, groff
 
  No. Pandoc is too actively developed to go into the HP. It's also much
  more of an end-user application than a standard library - it's
  applications are not general enough to be included in the standard
  distribution.
 

 One comment on your thoughtful post.

 What role does having unique capabilities for the Haskell Platform play?

 Our base library is already notable for having OpenGL support out of the
 box. Maybe markup/markdown formats (for example) would also help Haskell
 stand out from the crowd. A similar case would be gtk2hs out of the box
 (Python supplied Tcl guis).


I just thought of something I wanted to use Haskell for at work.  It would
be a tool used internally on windows and osx.  I was thinking to myself,
Well, it would be nice if it had a GUI and the deps for building it were
easy to satisfy.  Naturally I looked at what packages the HP provides and I
was disappointed to find out that other than OpenGL/GLUT and Win32 there are
no GUI libraries provided.  So a cross-platform GUI library would be much
appreciated.  Whether that's wxHaskell, gtk2hs, or something else is not
terribly important to me.  On a side note, SDL support would be a nice
addition to the OpenGL support.  I think the other dependencies for what I
have in mind are easily satisfied by the HP as it is.

It would also be nice if we had some sort of web development platform as
part of the HP.  Those .NET folks have all these neat add-on libraries that
just come bundled.  Makes me jealous.  Cabal-install makes things much
easier overall so maybe I shouldn't complain...

Thanks for the HP!

Jason


Re: [Haskell-cafe] Re: DDC compiler and effects; better than Haskell?

2009-08-13 Thread roconnor

On Thu, 13 Aug 2009, rocon...@theorem.ca wrote:

Actually you need five versions: The pure version, the pre-order traversal, 
the post-order traversal, the in-order traversal, and the reverse in-order 
traversal.  And that is just looking at syntax.  If you care about your 
semantics you could potentially have more (or less).


Minor technical correction.  The four syntactic traversals are: pre-order,
post-order, reverse pre-order, and reverse post-order.  The in-order and
reverse in-order are examples of other semantic traversals specific to
binary-tree-like structures.
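
For instance (an off-the-cuff sketch, just to illustrate the orderings on a
binary tree):

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- Pre-order: effect at the node, then the left subtree, then the right.
    mapPreM :: Monad m => (a -> m b) -> Tree a -> m (Tree b)
    mapPreM _ Leaf         = return Leaf
    mapPreM f (Node l x r) = do
      y  <- f x
      l' <- mapPreM f l
      r' <- mapPreM f r
      return (Node l' y r')

    -- In-order (specific to binary trees): left subtree, node, right subtree.
    mapInM :: Monad m => (a -> m b) -> Tree a -> m (Tree b)
    mapInM _ Leaf         = return Leaf
    mapInM f (Node l x r) = do
      l' <- mapInM f l
      y  <- f x
      r' <- mapInM f r
      return (Node l' y r')

Both build the same tree; only the order in which the effects fire differs,
and the reverse variants simply swap the two subtree recursions.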


--
Russell O'Connor  http://r6.ca/
``All talk about `theft,''' the general counsel of the American Graphophone
Company wrote, ``is the merest claptrap, for there exists no property in
ideas musical, literary or artistic, except as defined by statute.''


Re : [Haskell-cafe] Improving event management in elerea

2009-08-13 Thread jean legrand
 
 http://hpaste.org:80/fastcgi/hpaste.fcgi/view?id=8159
 
 and I have two questions.
 
 1) my event function (line 79) returns a new signal and
 takes 3 arguments : an initial signal and a Bool signal
 which triggers when to apply the transform function given as
 a third argument. The body of the event function uses two
 calls to the sampler function. Is it possible to write this
 body with a unique call to sampler ?
 

Second question solved with

event ini b f = mdo
  let gs = generator eb $ (flat.f) <$> s'
  eb <- edge b
  s' <- sampler $ storeJust ini gs
  s  <- sampler $ storeJust s' gs
  return s

but I still can't answer my first one (couldn't this function, or an
improved version of it, be part of the library?)






Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-13 Thread Gregory Michael Travis

John A. De Goes j...@n-brain.net sed:
 Hmmm, bad example. Assume memory instead. That said, reordering/
 parallelization of *certain combinations of* writes/reads to
 independent files under whole program analysis is no less safe than
 sequential writes/reads. It just feels less safe, but the one thing
 that will screw both up is interference from outside programs.

This reminds me of something I've heard of: the commutative monad.  I
might have seen the phrase in some presentation by Simon Peyton-Jones.

I took it to mean some sort of monad that sequenced computations, but
was also capable of reordering them according to rules specified by
the implementer of the monad.  Thus, you could re-order operations on
separate files, because they can't possibly interfere with each other.

This strikes me as an important idea, something that would allow you
to incrementally optimize while keeping safety.

For example, let's say you implement a complex distributed system,
using a single distributed state monad to manage global state.  This
would be very slow, since you would no doubt have to have a central
node controlling the order of all side-effects.  But this node could
re-group and re-distribute the side-effects.  This would be done by
using the type system to carefully separate effects that couldn't
interfere.

I'm not expressing this as well as I'd like.  From the programmer's
perspective, though, it could have a nice property: you start with a
slow, but trivial state monad.  Then you add rules and clauses that
indicate when two effects can have their order switched; all the
while, the meaning of the program is unchanged.

This is not unlike nodes in a distributed database that have to receive
updates from other nodes and figure out how to apply all the updates
in a coherent order, or throw an exception if they cannot.
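
As a small, hand-wavy illustration of the idea (mine, not from any paper): a
monad is commutative when independent bindings can be swapped without
changing the result,

    do { x <- a; y <- b; k x y }   ==   do { y <- b; x <- a; k x y }

and Reader is the standard boring example, since both orders just consult
the same environment:

    import Control.Monad.Reader (Reader, asks, runReader)

    pairR, pairR' :: Reader (Int, Int) (Int, Int)
    pairR  = do { x <- asks fst; y <- asks snd; return (x, y) }
    pairR' = do { y <- asks snd; x <- asks fst; return (x, y) }

    -- runReader pairR (1, 2) == runReader pairR' (1, 2) == (1, 2)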


[Haskell-cafe] matching types in Haskell

2009-08-13 Thread pat browne
Hi,
 I wish to align or match some Haskell types in order to represent a set
of ontologies and their embedded concepts [1]. I am not sure whether
modules or classes could be used. I have done this task in a language
called Maude [2]. The code is below. I appreciate that Maude is not very
popular, so there is a brief description below. The diagram below
represents the semantics of what I am trying to achieve. I want to
construct MERGEDONTOLOGY from ONTOLOGY1 and ONTOLOGY2, both of which are
based on COMMON.

                     MERGEDONTOLOGY
     { Woman, RiverBank, FinancialBank, HumanBeing }
              /                        \
             /                          \
        ONTOLOGY1                    ONTOLOGY2
  { Woman, Bank, Person }      { Woman, Bank, Human }
             \                          /
              \                        /
                  { Woman, Person }
                        COMMON



Each Maude module represents (below) a distinct ontology, each sort (or
type in Haskell) represents a concept.  This example includes both
synonyms and homonyms.
I want to align ONTOLOGY1 and ONTOLOGY2 w.r.t COMMON to produce
MERGEDONTOLOGY. The details of the concepts are as follows.

The Woman concept should be the same in all ontologies, there is only
one Woman concept and it is named as such in each ontology.
Hence there should be only one Woman.MERGEDONTOLOGY

There is only one concept HumanBeing.MERGEDONTOLOGY, but there are 3
synonyms Human.ONTOLOGY2, Person.ONTOLOGY1, and Person.COMMON
It would seem that Person.COMMON  should be mapped (or renamed?) to
Human.ONTOLOGY2 which in turn is renamed (or mapped) to
HumanBeing.MERGEDONTOLOGY.

The homonyms are Bank.ONTOLOGY1 and Bank.ONTOLOGY2 which should become
distinct entities in RiverBank.MERGEDONTOLOGY and
FinancialBank.MERGEDONTOLOGY

My question is: how should this be done in Haskell?
Is there a classes-only solution?
Is there a module-only solution?
Or do I need both?

Regards,
Pat
=
Brief description of Maude code
In the Maude Language the module is the basic conceptual unit.
Sorts are declared in modules
Every line ends with a space dot.
In this case each module is bracketed by fth .. endfth
 There are various types of mappings between modules.
 In this case I used renamings where the syntax is
 *( name_in_source_module to  name_in_target_module)
 The keyword protecting is a type of import.
The + symbol between the modules forms a new module that includes all
the information in each of its arguments
==
fth COMMON is
sorts Woman Person .
endfth

fth  ONTOLOGY1 is
protecting COMMON .
sorts Bank .
endfth


fth ONTOLOGY2 is
protecting COMMON *( sort Person to Human) .
sorts Bank .
endfth


fth MERGEDONTOLOGY is
including (ONTOLOGY1 * (sort Bank to RiverBank,sort Person to
HumanBeing)) + (ONTOLOGY2 * (sort Human to HumanBeing, sort Bank to
FinancialBank))  .
endfth

with the command
show all MERGEDONTOLOGY
the system will produce the following module
fth MERGEDONTOLOGY0 is
  sorts Woman HumanBeing FinancialBank RiverBank .
endfth
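
One module-level encoding I have been toying with (just a guess on my part,
probably naive; the module names are hypothetical) keeps each ontology in
its own module and does the renaming with type synonyms in the merged
module:

    -- Ontology1 and Ontology2 both import their Woman and Person/Human
    -- types from a Common module, so the shared concepts really are the
    -- same types.
    module MergedOntology where

    import qualified Ontology1
    import qualified Ontology2

    type Woman         = Ontology1.Woman   -- one shared concept
    type HumanBeing    = Ontology2.Human   -- synonym of Ontology1.Person
    type RiverBank     = Ontology1.Bank    -- the homonyms get
    type FinancialBank = Ontology2.Bank    -- distinct names here

but I don't know whether this scales to the kind of alignment described
in [1], hence the question.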


References
 [1] Shapes of Alignments: Construction, Combination, and Computation, by
Oliver Kutz, Till Mossakowski, and Mihai Codescu
 http://www.informatik.uni-bremen.de/~okutz/shapes.pdf
 [2] Information on Maude at: http://maude.cs.uiuc.edu/


Re: [Haskell-cafe] Re: Cleaner networking API - network-fancy

2009-08-13 Thread Thomas DuBuisson
I'm considering putting not insignificant effort into reworking the
Network API during HacPDX.  Assuming Johan agrees with the changes
(and I don't know exactly what those are yet), I'd like to leave
Network.Socket for general use and rework the Network module to allow
easier TCP/UDP sending and receiving, binding to particular IP
addresses, using bytestrings, receiving data with IP Headers, and
sending with IP headers.  Multicast is another good idea that should
be in there.

In general, the community would probably benefit if all these small
packages (network, network-data, network-fancy, network-bytestring,
network-multicast, network-server) gave way to fewer, more complete
packages.

Right now the thought has come to me that the cleanest, most uniform
method might be to have a Config data type with all these ideas as
options and use a single 'connect', 'listen' or 'receive' function for
all the different protocols and their various options.  I'll think on
it.
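
Something roughly like this (the names are entirely made up, just to sketch
the shape of the idea):

    data Proto = TCP | UDP | UnixSock

    data NetConfig = NetConfig
      { proto      :: Proto
      , bindAddr   :: Maybe String   -- bind to a particular local address
      , multicast  :: Bool           -- join a multicast group when receiving
      , withHeader :: Bool           -- expose/accept raw IP headers
      }

    -- connect :: NetConfig -> String -> Int -> IO Socket
    -- listen  :: NetConfig -> Int -> (ByteString -> IO ()) -> IO ()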

Thomas

On Thu, Aug 13, 2009 at 10:45 AM, Bardur Arantssons...@scientician.net wrote:
 Taru Karttunen wrote:

 Hello

 network-fancy offers a cleaner API to networking facilities in
 Haskell. It supports high-level operations on tcp, udp and unix
 sockets.
 I would like some feedback on the API

 http://hackage.haskell.org/packages/archive/network-fancy/0.1.4/doc/html/Network-Fancy.html

 In particular:
 * Does the type of the server function in dgramServer make sense?
  or would (packet - Address - (packet - IO ()) - IO ()) be
  better?
 * Does the StringLike class make sense?
 * Any other suggestions?

 - Taru Karttunen

 One thing that seems to be missing (and also seems to be missing from the
 GHC standard libraries AFAICT) is listening for multicast UDP. This requires
 some extra socket gymnastics to handle group membership. The details can be
 found in the network-multicast package.

 Cheers,

 --
 /me who wonders where his signature went.



Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-13 Thread Sebastian Sylvan
On Thu, Aug 13, 2009 at 2:19 PM, John A. De Goes j...@n-brain.net wrote:

 What if you have another program, written in C or something, that
 monitors a file for changes, and if so changes the contents of another file?
 Surely to catch that you must mark *all* file system access as
 interfering? Even worse, another program could monitor the state of a
 file and conditionally disable the network driver, now file access
 interferes with network access.


 A compiler or runtime system can't know about these kinds of things --
 unless perhaps you push the effect system into the operating system
 (interesting idea). The best you can do is ensure the program itself is
 correct in the absence of interference from other programs


I think the best you can do is make sure any code which is vulnerable to
such interference won't be subject to unsafe transformations (like changing
the order of evaluation).
So I do think pretty much anything that relies on the outside world needs to
go into one big effects category so the compiler/runtime will stay out
and let the programmer explicitly define the ordering of those operations,
precisely because the compiler has no way of knowing anything about what
kind of assumptions are in effect.


-- 
Sebastian Sylvan


Re: [Haskell-cafe] Re: Cleaner networking API - network-fancy

2009-08-13 Thread Johan Tibell
On Thu, Aug 13, 2009 at 8:09 PM, Thomas
DuBuissonthomas.dubuis...@gmail.com wrote:
 I'm considering putting not insignificant effort into reworking the
 Network API during HacPDX.  Assuming Johan agrees with the changes
 (and I don't know exactly what those are yet), I'd like to leave
 Network.Socket for general use and rework the Network module to allow
 easier TCP/UDP sending and receiving, binding to particular IP
 addresses, using bytestrings, receiving data with IP Headers, and
 sending with IP headers.  Multicast is another good idea that should
 be in there.

Designing a correct and usable socket API is difficult. There are lots
of corner cases that are easy to get wrong. For example, the current
socket API always throws an EOF exception if recv returns a zero
length string even though a zero length return value only indicates
EOF when using TCP! Furthermore, the current API uses Strings which
makes no sense.  The library is poorly documented and lacks tests
(compare it to e.g. Python's socket module).
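
To illustrate the distinction (a quick sketch against the
Network.Socket.ByteString interface, not code from the library): on a
connected TCP socket a zero-length result really does mean EOF, so a
read-everything loop can use it as its stopping condition, whereas a
zero-length UDP datagram is perfectly valid data:

    import Network.Socket (Socket)
    import Network.Socket.ByteString (recv)
    import qualified Data.ByteString as B

    recvAll :: Socket -> IO B.ByteString
    recvAll sock = go []
      where
        go acc = do
          chunk <- recv sock 4096
          if B.null chunk          -- EOF, but only for stream sockets
            then return (B.concat (reverse acc))
            else go (chunk : acc)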

There are other pitfalls as well. If you only cover parts of the BSD
API you will alienate a fraction of your users who are forced to write
their own bindings, not an easy task, especially if you want the
binding to be cross-platform. This applies to any OS binding.

My best advice would be to form a special interest group (SIG) and
iron out the details. This doesn't have to be anything terribly
formal. Just a bunch of people who are interested in improving things.

The SIG could keep a wiki with the current design. This makes it
easier for both the members and other interested developers to review
the design and find flaws.

 In general, the community would probably benefit if all these small
 packages (network, network-data, network-fancy, network-bytestring,
 network-multicast, network-server) gave way to fewer, more complete
 packages.

I absolutely agree. Hackage has increased the number of libraries a
lot, which is great. However, great libraries often come out of peer
reviews and attention to detail.

As for network-bytestring, it is slated to be merged back into network
under Network.Socket.ByteString if it passes community review.

 Right now the thought has came to me that the cleanest, most uniform
 method might be to have a Config data type with all these ideas as
 options and use a single 'connect', 'listen' or 'receive' function for
 all the different protocols and their various options.  I'll think on
 it.

Write up a wiki page explaining how nicely this idea solves some of the
current issues. That would be a good start.
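
For concreteness, the kind of record being floated might look something
like the sketch below. Every name in it is hypothetical; nothing like it
exists in network or its offshoots today.

data Transport = TCP | UDP
    deriving (Eq, Show)

data NetConfig = NetConfig
    { transport       :: Transport
    , localAddress    :: Maybe String  -- address to bind to, if any
    , remoteAddress   :: Maybe String
    , port            :: Int
    , withIPHeaders   :: Bool          -- send/receive raw IP headers
    , multicastGroups :: [String]
    } deriving Show

defaultConfig :: NetConfig
defaultConfig = NetConfig TCP Nothing Nothing 0 False []

Whether a single connect/listen/receive over such a record can really
paper over the TCP/UDP differences is exactly the sort of question a wiki
page should thrash out.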

Just my 2 cents,

Johan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Cleaner networking API - network-fancy

2009-08-13 Thread Taru Karttunen
Excerpts from Thomas DuBuisson's message of Thu Aug 13 21:09:38 +0300 2009:
 Right now the thought has come to me that the cleanest, most uniform
 method might be to have a Config data type with all these ideas as
 options and use a single 'connect', 'listen' or 'receive' function for
 all the different protocols and their various options.  I'll think on
 it.

UDP does not play nicely with Handles. TCP wants Handles. Thus
using a unified API does not make much sense. The semantics are just
too different. 

- Taru Karttunen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proposal: TypeDirectedNameResolution

2009-08-13 Thread John Meacham
On Mon, Jul 27, 2009 at 04:41:37PM -0400, John Dorsey wrote:
 I'm assuming that name resolution is currently independent of type
 inference, and will happen before type inference.  With the proposal this is
 no longer true, and in general some partial type inference will have to
 happen before conflicting unqualified names are resolved.
 
 My worry is that the proposal will require a compliant compiler to
 interweave name resolution and type inference iteratively.

Indeed. This is my concern too. I can't see any way to implement it
in jhc, at least not without some major hassle. Name resolution occurs
several stages before typechecking, even before desugaring; having to
intertwine it with type checking would be a complicated affair, to say
the least.

John

-- 
John Meacham - ⑆repetae.net⑆john⑈ - http://notanumber.net/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Proposal: TypeDirectedNameResolution

2009-08-13 Thread John Meacham
On Tue, Jul 28, 2009 at 10:05:29AM -0700, Ryan Ingram wrote:
 On Tue, Jul 28, 2009 at 1:41 AM, Heinrich
 Apfelmus apfel...@quantentunnel.de wrote:
  While I do agree that qualified names are annoying at times, I think
  that type directed name disambiguation is a Pandora's box.
 
 I see where you are going, but I'm not sure I agree.  Let me give an
 example from another language with this kind of resolution: C++.  From
 a purely practical point of view, function overloading in C++ does
 what I want almost all the time.  And when it doesn't do what I want,
 it's always been immediately obvious, and it's a sign that my design
 is flawed.

I would be careful about assuming aspects of C++ will translate to
Haskell at the type system level. The reason is that in C++ all type
information flows in one direction. For instance:

int foo(...) {
   ...
   int x = bar(z,y);
   ...
}

All of the types of everything passed to bar and everything it returns
(x, y, and z) are fully specified at every call site, so 'name
overloading' is a simple matter of finding the best match. However, in
Haskell this isn't the case: 'bar' may affect the types used in 'foo'.
There is two-way type information passing in Haskell. This radically
changes the landscape for what is possible in the two type systems.
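
Here is a tiny illustration of that two-way flow (my own example, not
from the original mail): the context at the use site picks the instance,
which argument-directed overload resolution in C++ cannot do.

class Bar a where
    bar :: String -> a

instance Bar Int where
    bar = length            -- result type Int

instance Bar Bool where
    bar = not . null        -- result type Bool

foo :: String -> Int
foo s = bar s + 1           -- the surrounding arithmetic and foo's
                            -- signature force bar's result to be Int,
                            -- which is what selects the instance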

John


-- 
John Meacham - ⑆repetae.net⑆john⑈ - http://notanumber.net/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] containers and maps

2009-08-13 Thread John Lato
On Thu, Aug 13, 2009 at 1:51 PM, Jake McArthur jake.mcart...@gmail.com wrote:
 John Lato wrote:


 This might work with UVector (I intend to try it this evening); I
 don't know how well the fusion framework will hold up in class
 dictionaries.

 Do report back, as I am curious as well.

I have just recently hacked together a small test.  The code is at
http://inmachina.net/~jwlato/haskell/testUVector.hs

The task is to generate a list of 1 million Ints (really 1e6-1), map a
multiply by 2, and check to see if any values equal 0.  The uvector
code is (essentially):

 let res = anyU (== ( 0:: Int)) . mapU (* 2) . enumFromToU 1 $ 100

and by itself runs in a small fraction of a second.  Technically the
start and stop values are specified on the command line, but these are
the values I predominantly used.

Here are results for some other modes:

ListLike.map: significantly slower, and runs out of memory (memory
request failed)
ListLike.rigidMap: appears to be the same as using mapU directly
mapReduce:  significantly slower (the 1e6 element test completed after
about an hour), but ran in a very small amount of memory.

It looks like GHC was able to see through the rigidMap function at
least, but I don't know if this would still work with the instance
defined in another file.

Cheers,
John
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Space for commentaries on hackage

2009-08-13 Thread Henning Thielemann


On Wed, 5 Aug 2009, Maurício CA wrote:


It would be nice to have a place for anonymous
comments below each page of a hackage package, maybe
with a cabal option to enable/disable that for a
particular package. Authors of packages with few
users may want that as a way to get first impressions
on their work that they would otherwise not get. (At least,
I would, so I thought maybe others would too.)


I used to use the Haskell Wiki for project homepages, so people can leave
comments on the corresponding talk pages.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Improving event management in elerea

2009-08-13 Thread Patai Gergely
Hello Jean,

If I got you right, you basically want to have a signal that changes its
value by a predetermined transform function whenever there is an edge
detected on a bool signal. In that case, it would be easier to define it
in terms of transfer:

accumIf :: a -> (a -> a) -> Signal Bool -> SignalMonad (Signal a)
accumIf v0 f b = transfer v0 g b
  where g _dt c x0 = if c then f x0 else x0

event :: a -> Signal Bool -> (a -> a) -> SignalMonad (Signal a)
event v0 b f = accumIf v0 f =<< edge b

I even factored out the edge functionality. Of course you also have to
pass playerX0 to this function as is, without the pure wrapping.

Also, your flat function is practically equivalent to (return . pure),
why did you define it using transfer?

Another thing to note is that you might want to define playerX as an
integral of a velocity derived from lpress and rpress, so you wouldn't
need to press and release the buttons repeatedly, and the speed would be
the same regardless of sampling rate, since integral takes the time step
into account.
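
A sketch of that velocity-based variant, using the same transfer
combinator as above. The module name, the Applicative instance for Signal
and the Double time step are assumptions on my part, and the integral is
written out by hand here in case the library's own version differs:

import Control.Applicative ((<$>), (<*>))
import FRP.Elerea (Signal, SignalMonad, transfer)

velocityPlayerX :: Double -> Signal Bool -> Signal Bool
                -> SignalMonad (Signal Double)
velocityPlayerX playerX0 lpress rpress = integral' playerX0 velocity
  where
    speed = 100  -- units per second, an arbitrary choice

    -- negative while only the left button is held, positive while only
    -- the right one is held
    velocity = (\l r -> (if r then speed else 0) - (if l then speed else 0))
                 <$> lpress <*> rpress

    -- Euler-style integral: previous position plus velocity times the
    -- time step supplied by transfer
    integral' x0 v = transfer x0 (\dt v' x -> x + v' * dt) v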

 but I still can't answer my first one (couldn't this function
 -or its improvement- be part of the library ?)
Sure, anything is possible. Elerea is a new library, and it deliberately
exposes only some low-level constructs. In particular transfer, which
basically lets you fall back on pure state transformer functions. My
idea was that as more and more code is written in it, we can extract
recurring patterns and add them to the library. But at the moment it is
not clear what the useful patterns are, and I'd rather not start
guessing and littering the library with useless combinators in advance.

For instance, the accumIf function is a conditional version of stateful,
and it might be generally useful. Whether it's the best name, that's
another question.

Gergely

-- 
http://www.fastmail.fm - The professional email service

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Improving MPTC usability when fundeps aren't appropriate?

2009-08-13 Thread Bertram Felgenhauer
Daniel Peebles wrote:
 I've been playing with multiparameter typeclasses recently and have
 written a few uncallable methods in the process. For example, in
 
 class Moo a b where
   moo :: a -> a
 
 Another solution would be to artificially force moo to take
 a dummy b so that the compiler can figure out which instance you
 meant. That's what I've been doing in the mean time, but wouldn't it
 be simpler and less hackish to add a some form of instance
 annotation, like a type annotation, that would make it possible to
 specify what instance you wanted when it's ambiguous?

Syntax aside, dummy arguments have a disadvantage when it comes to
optimizing code, because the compiler doesn't know that the dummy
argument is unused in the function; indeed you could define instances
where the dummy argument is used.

For this reason it's technically better to use a newtype instead:

newtype Lambda t a = Lambda { unLambda :: a }

and, say,

class Moo a b where
moo :: Lambda b (a -> a)

Note that we can convert between functions taking dummy arguments and
such lambda types easily:

lambdaToDummy :: Lambda t a -> t -> a
lambdaToDummy a _ = unLambda a

dummyToLambda :: (t -> a) -> Lambda t a
dummyToLambda f = Lambda (f undefined)

In fact, lambdaToDummy makes a great infix operator:

(@@) :: Lambda t a -> t -> a
(@@) = lambdaToDummy

infixl 1 @@

Now we can write

moo @@ (undefined :: x)

where with dummy arguments we would write

moo (undefined :: x) .

The compiler will inline (@@) and lambdaToDummy (they're both small)
and produce

   unLambda moo ,

that is, the plain value of type a -> a that we wanted to use in the
first place. All this happens after type checking and fixing the
instance of Moo to use.
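
To round this off, a hypothetical instance and call site (the Bool
parameter exists purely to select the instance; MultiParamTypeClasses is
needed, as for any Moo instance):

instance Moo Int Bool where
    moo = Lambda (* 2)

example :: Int
example = (moo @@ (undefined :: Bool)) 21   -- picks Moo Int Bool, yields 42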

Bertram
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proposal: TypeDirectedNameResolution

2009-08-13 Thread Twan van Laarhoven

John Meacham wrote:

On Mon, Jul 27, 2009 at 04:41:37PM -0400, John Dorsey wrote:


I'm assuming that name resolution is currently independent of type
inference, and will happen before type inference.  With the proposal this is
no longer true, and in general some partial type inference will have to
happen before conflicting unqualified names are resolved.

My worry is that the proposal will require a compliant compiler to
interweave name resolution and type inference iteratively.



Indeed. This is my concern too. I can't see any way to implement it
in jhc, at least not without some major hassle. Name resolution occurs
several stages before typechecking, even before desugaring; having to
intertwine it with type checking would be a complicated affair, to say
the least.


You can still resolve the names first, while keeping the ambiguity:

data Expr = ...
  | OverloadedVar [UniqueId] -- after name resolution

Then the type checker checks all possible overloads, and in the end only one 
variable reference is left.


TDNR would still complicate the typechecker, since it suddenly needs to do 
backtracking.



Twan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hack on the Delve core, with Delve and Haskell

2009-08-13 Thread spoon


 Could you create a comparison of Delve to other (potentially) similar
 languages?  For example, how is Delve similar/dissimilar to Clojure
 and Scala?
  

I don't have any experience with Clojure at all, but looking at the
page, it would appear that Clojure does not support first-class
continuations, while Delve does.  Delve is closer to scheme in this
respect.

Delve differs from Scala in that Delve is a much more imperative
language.  Classes are not static - they are created at run-time. In
this way, Delve is akin to a well-typed Ruby.

Delve aims to support the open classes and objects of Smalltalk and
Ruby. Since it is typed, breaking the encapsulation of an object could
be tracked by the type system - I'd venture to say this would be useful
for a situation like a classbox, which modifies classes after they are
defined.

 Delve is released under the terms of the GNU GPLv3.
 
 Not intended as a criticism of the GPL or your decision to use it,
 but does this impact people's ability to use the Delve standard
 libraries in their own non-GPL projects?

Don't worry!  The GPL will be used to protect only the source of the
virtual machine and the compiler - since Delve bytecode is just input
for the VM, it doesn't constitute a modification of the software so it
can be proprietary. In the same way, the GPL still allows
proprietary extensions to be linked into the VM or compiler through
specific interfaces.

- John


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] containers and maps

2009-08-13 Thread Edward Kmett
Yeah, my answer on the ByteString front is to make a bunch of Reducers for a
Builder data type that works like the one in Data.Binary.Builder. It
generates a much more suitable Reducer and can reduce Chars, Strings, Strict
and Lazy UTF8-encoded Bytestrings, etc. I'm using that inside of my toy
compiler right now when I need to generate output, like a Lazy ByteString of
source. (My variant Builder -- ok, my mostly cut and pasted Builder, has
been made slightly smarter and is now tuned to work with UTF8 encoded data,
which is what I've been feeding it lately.) I may migrate it into the
monoids library from my current project if there is interest, but I'm trying
to avoid ballooning that up too far in size again and re-entering dependency
hell.
The monoids library drew a pretty hard distinction between things that build
up nicely (Monoids/Reducers) and things that tear down easily (Generators)
but only works with a few things that do both efficiently (e.g.
Seq/FingerTree). For my purposes this has worked rather well, but raw strict
(and to a lesser degree, lazy) ByteStrings make abysmal Reducers. ;)

In the end, bolting a list-like interface onto something that can be _made_ to
implement all of the operations, but can't be made to implement them remotely
efficiently, seems like it is asking for trouble, bug reports about
performance, and pain.

On the other hand, FingerTrees of wrapped strict ByteStrings work nicely as
a monoidal lazy bytestring-like structure that provides an efficient snoc,
indexing operation, and very efficient slicing for extracting token values
with maximal sharing. I'll be giving a talk at the next Boston Haskell User
Group in a few days about my abuse of those in a terrible hodge-podge
solution of Iteratees, Parsec 3 and monoids, which yields an efficient
parallel/incremental lexer.
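
For the curious, here is roughly what such a chunked structure can look
like, written against the fingertree package's Data.FingerTree and
measured by byte count; the Chunk/Chunked/snocChunk/splitAtBytes names are
made up purely for illustration.

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
import qualified Data.ByteString as B
import Data.FingerTree (FingerTree, Measured(..), (|>), split)
import Data.Monoid (Sum(..))

newtype Chunk = Chunk B.ByteString

-- Each chunk is measured by its length, so the tree always knows the
-- byte count of any prefix.
instance Measured (Sum Int) Chunk where
    measure (Chunk bs) = Sum (B.length bs)

type Chunked = FingerTree (Sum Int) Chunk

-- Amortized O(1) snoc of a new strict chunk.
snocChunk :: Chunked -> B.ByteString -> Chunked
snocChunk t bs = t |> Chunk bs

-- Split after roughly n bytes (at chunk granularity), sharing the
-- underlying ByteStrings between both halves.
splitAtBytes :: Int -> Chunked -> (Chunked, Chunked)
splitAtBytes n = split (\(Sum m) -> m > n)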

I'll see about posting the slides afterwards.

-Edward Kmett

On Thu, Aug 13, 2009 at 4:43 PM, John Lato jwl...@gmail.com wrote:

 On Thu, Aug 13, 2009 at 1:51 PM, Jake McArthur jake.mcart...@gmail.com
 wrote:
  John Lato wrote:
 
 
  This might work with UVector (I intend to try it this evening); I
  don't know how well the fusion framework will hold up in class
  dictionaries.
 
  Do report back, as I am curious as well.

 I have just recently hacked together a small test.  The code is at
 http://inmachina.net/~jwlato/haskell/testUVector.hs

 The task is to generate a list of 1 million Ints (really 1e6-1), map a
 multiply by 2, and check to see if any values equal 0.  The uvector
 code is (essentially):

  let res = anyU (== ( 0:: Int)) . mapU (* 2) . enumFromToU 1 $ 100

 and by itself runs in a small fraction of a second.  Technically the
 start and stop values are specified on the command line, but these are
 the values I predominantly used.

 Here are results for some other modes:

 ListLike.map: significantly slower, and runs out of memory (memory
 request failed)
 ListLike.rigidMap: appears to be the same as using mapU directly
 mapReduce:  significantly slower (the 1e6 element test completed after
 about an hour), but ran in a very small amount of memory.

 It looks like GHC was able to see through the rigidMap function at
 least, but I don't know if this would still work with the instance
 defined in another file.

 Cheers,
 John
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Improving event management in elerea

2009-08-13 Thread jean legrand
 detected on a bool signal. In that case, it would be easier
 to define it
 in terms of transfer:
 
 accumIf :: a -> (a -> a) -> Signal Bool ->
 SignalMonad (Signal a)
 accumIf v0 f b = transfer v0 g b
   where g _dt c x0 = if c then f x0 else x0
 
 event :: a -> Signal Bool -> (a -> a) ->
 SignalMonad (Signal a)
 event v0 b f = accumIf v0 f =<< edge b

nice

 Also, your flat function is practically equivalent to
 (return . pure),
 why did you define it using transfer?

lack of reflexes (and reflection!)

 Another thing to note is that you might want to define
 playerX as an
 integral of a velocity derived from lpress and rpress, so
 you wouldn't
 need to press and release the buttons repeatedly, and the
 speed would be
 the same regardless of sampling rate, since integral takes
 the time step
 into account.
 

or

event :: a -> Signal Bool -> (a -> a) -> SignalMonad (Signal a)
event v0 b f = accumIf v0 f =<< return b

?

 For instance, the accumIf function is a conditional version
 of stateful,
 and it might be generally useful. Whether it's the best
 name, that's
 another question.
 
 Gergely

thank you for your very helpful post





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe