Re: proposal for trailing comma and semicolon

2013-05-18 Thread Heinrich Apfelmus

Tillmann Rendel wrote:


I like to put commas at the beginning of lines, because there, I can 
make them line up and it is visually clear that they are all at the same 
nesting level. I like how the commas look a bit like bullet points. For 
example, I would write:


items =
  [ "red"
  , "blue"
  , "green"
  ]

Could we extend Garrett's proposal to also allow prefixing the first 
element of a list with a comma, to support this style:


items = [
  , "red"
  , "blue"
  , "green"
  ]

Allowing an optional extra comma both at the beginning and at the end 
would give programmers the choice of where to put their commas.


This is the style I am using for records and lists as well. Here is an 
example from actual code:


data EventNetwork = EventNetwork
    { actuate :: IO ()
    , pause   :: IO ()
    }

These days, all my record definitions look like that.

Allowing a superfluous leading comma would be great, because it makes 
it easier to move the first line around.
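
For instance, under the extended proposal, swapping the first two
elements would be a plain exchange of lines, with no comma juggling (a
sketch of the proposed syntax, not legal Haskell today):

  items = [
    , "blue"
    , "red"
    , "green"
    ]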



Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Proposal: Scoping rule change

2012-07-25 Thread Heinrich Apfelmus

Heinrich Apfelmus wrote:

Lennart Augustsson wrote:

It's not often that one gets the chance to change something as
fundamental as the scoping rules of a language.  Nevertheless, I would
like to propose a change to Haskell's scoping rules.

The change is quite simple.  As it is, top level entities in a module
are in the same scope as all imported entities.  I suggest that this
is changed so that the entities from the module are in an inner scope
and do not clash with imported identifiers.

Why?  Consider the following snippet

module M where
import I
foo = True


I like it.

That said, how does the fact that the scope is nested affect the 
export list? If the module scope is inside the scope of the imports, 
then the name  I.foo  should appear in the export list, not  foo , 
because the latter lives in the outermost scope.
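
For instance, assuming module  I  also exports a name  foo , a sketch
of the question (in the proposed semantics, not current Haskell):

   module M (foo) where
   import I
   foo = True

Does the export list refer to M's own  foo  (inner scope), or to the
imported  I.foo  (outer scope)?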


I think the solution to these problems is to rearrange the  import 
declarations so that the syntax mirrors the scoping rules. In other 
words, I boldly propose to move the  import  declaration *before* the 
module  declaration, i.e.


   import I
   module M where
   foo = True

or even

   import I where
   module M where
   foo = True

This way, it is clear that the module M opens an inner scope and that 
the export list of M uses the names from the inner scope.


Actually, the latter syntax should be

   import I in ...
   let import I in ...

The idea is that this mirrors a  let  expression. (The "where" keyword 
would be misleading.)



Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Proposal: Scoping rule change

2012-07-25 Thread Heinrich Apfelmus

Lennart Augustsson wrote:

It's not often that one gets the chance to change something as
fundamental as the scoping rules of a language.  Nevertheless, I would
like to propose a change to Haskell's scoping rules.

The change is quite simple.  As it is, top level entities in a module
are in the same scope as all imported entities.  I suggest that this
is changed so that the entities from the module are in an inner scope
and do not clash with imported identifiers.

Why?  Consider the following snippet

module M where
import I
foo = True


I like it.

That said, how does the fact that the scope is nested affect the 
export list? If the module scope is inside the scope of the imports, 
then the name  I.foo  should appear in the export list, not  foo , 
because the latter lives in the outermost scope.


I think the solution to these problems is to rearrange the  import 
declarations so that the syntax mirrors the scoping rules. In other 
words, I boldly propose to move the  import  declaration *before* the 
module  declaration, i.e.


   import I
   module M where
   foo = True

or even

   import I where
   module M where
   foo = True

This way, it is clear that the module M opens an inner scope and that 
the export list of M uses the names from the inner scope.



Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: String != [Char]

2012-03-24 Thread Heinrich Apfelmus

Edward Kmett wrote:

Like I said, my objection to including Text is a lot less strong than
my feelings on any notion of deprecating String.

[..]

The pedagogical concern is quite real, remember many introductory
language classes have time to present Haskell and the list data type
and not much else. Showing parsing through pattern matching on
strings makes a very powerful tool, it's harder to show that with
Text.

[..]

The major benefits of Text come from FFI opportunities, but even
there if you dig into its internals it has to copy out of the array
to talk to foreign functions because it lives in unpinned memory
unlike ByteString.


I agree with Edward Kmett on the virtues of  String = [Char]  for 
learning Haskell. I'm teaching beginners regularly and it is simply 
eye-opening for them that they can use the familiar list operations to 
solve real world problems which usually involve textual data.
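
A typical example of the kind meant here, using nothing but list
pattern matching (a sketch; the function is made up for illustration):

  -- Split a line of the form "key=value" at the first '='.
  splitKeyValue :: String -> (String, String)
  splitKeyValue []         = ("", "")
  splitKeyValue ('=':rest) = ("", rest)
  splitKeyValue (c:rest)   = let (k, v) = splitKeyValue rest
                             in (c : k, v)

  -- splitKeyValue "color=red"  ==  ("color", "red")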


Which brings me to the fundamental question behind this proposal: Why do 
we need Text at all? What are its virtues and how do they compare? What 
is the trade-off? (I'm not familiar enough with the Text library to 
answer these.)


To put it very pointedly: is a 20% performance increase on the current 
generation of computers worth the cost in terms of ease-of-use, when the 
performance can equally be gained by buying a faster computer or more 
RAM? I'm not sure whether I even agree with this statement, but this is 
the trade-off we are deciding on.



Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: PROPOSAL: Include record puns in Haskell 2011

2010-02-26 Thread Heinrich Apfelmus
Simon Marlow wrote:
> While I agree with these points, I was converted to record punning
> (actually record wildcards) when I rewrote the GHC IO library.  Handle
> is a record with 12 or so fields, and there are literally dozens of
> functions that start like this:
> 
>   flushWriteBuffer :: Handle -> IO ()
>   flushWriteBuffer Handle{..} = do
> 
> if I had to write out the field names I use each time, and even worse,
> think up names to bind to each of them, it would be hideous.

What about using field names as functions?

flushWriteBuffer h@(Handle {}) = do
... buffer h ...

Of course, you always have to drag  h  around.
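
Spelled out on a cut-down Handle (a sketch; the field and type names
here are invented, not GHC's actual ones):

  import Data.IORef

  data Buffer = Buffer String

  data Handle = Handle
      { haBuffer :: IORef Buffer
      , haName   :: FilePath
      }

  -- Field selectors used as plain functions: nothing new to name,
  -- at the price of passing  h  around.
  flushWriteBuffer :: Handle -> IO ()
  flushWriteBuffer h@(Handle {}) = do
      Buffer s <- readIORef (haBuffer h)
      putStr s
      writeIORef (haBuffer h) (Buffer "")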


Regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Proposal: Hexadecimal floating point constants

2010-02-20 Thread Heinrich Apfelmus
Nick Bowler wrote:
> I'd like to propose what I believe is a simple but valuable extension to
> Haskell that I haven't seen proposed elsewhere.
> 
> C has something it calls hexadecimal floating constants, and it would be
> very nice if Haskell had it too.  For floating point systems where the
> radix is a power of two (very common), they offer a means of clearly and
> exactly specifying any finite floating point value.
>
> [..]
> 
> Similarly, the greatest finite double value can be written as
> 0x1.fp+1023.
> 
> These constants have the form
> 
>   0x[HH][.H]p[+/-]DDD

If you don't want to wait on an (uncertain) inclusion into the Haskell
standard, you can implement a small helper function to that effect
yourself; essentially using  encodeFloat .
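
For example, a minimal sketch of such a helper (the name and argument
convention are made up here, not from any library):

  -- hexFloat mant fracDigits e  reads  mant  as a hexadecimal
  -- mantissa with  fracDigits  hex digits after the point, scaled
  -- by 2^e; the C literal 0x1.fp+4 becomes  hexFloat 0x1f 1 4 .
  hexFloat :: Integer -> Int -> Int -> Double
  hexFloat mant fracDigits e = encodeFloat mant (e - 4 * fracDigits)

  -- hexFloat 0x1f 1 4  ==  31.0   (i.e. 0x1.f * 2^4)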


Regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Unsafe hGetContents

2009-10-11 Thread Heinrich Apfelmus
Iavor Diatchki wrote:
> Hello,
> 
> well, I think that the fact that we seem to have a program context
> that can distinguish "f1" from "f2" is worth discussing because I
> would have thought that in a pure language they are interchangeable.
> The question is, does the context in Oleg's example really distinguish
> between "f1" and "f2"?  You seem to be saying that this is not the
> case:  in both cases you end up with the same non-deterministic
> program that reads two numbers from the standard input and subtracts
> them but you can't assume anything about the order in which the
> numbers are extracted from the input---it is merely an artifact of the
> GHC implementation that with "f1" the subtraction always happens the
> one way, and with "f2" it happens the other way.
>
> I can (sort of) buy this argument, after all, it is quite similar to
> what happens with asynchronous exceptions (f1 (error "1") (error "2")
> vs f2 (error "1") (error "2")).  Still, the whole thing does not
> "smell right":  there is some impurity going on here, and trying to
> offload the problem onto the IO monad only makes reasoning about IO
> computations even harder (and it is pretty hard to start with).  So,
> discussion and alternative solutions should be strongly encouraged, I
> think.

To put it in different words, here an elaboration on what exactly the
non-determinism argument is:


Consider programs  foo1  and  foo2  defined as

import Control.Exception (catch, evaluate, ErrorCall(..))

foo :: (a -> b -> c) -> IO String
foo f = Control.Exception.catch
    (evaluate (f (error "1") (error "2")) >> return "3")
    (\(ErrorCall s) -> return s)

foo1  = foo f1  where  f1 x y = x `seq` y `seq` ()
foo2  = foo f2  where  f2 x y = y `seq` x `seq` ()

Knowing how exceptions and  seq  behave in GHC, it is straightforward to
prove that

foo1  = return "1"
foo2  = return "2"

which clearly violates referential transparency. This is bad, so the
idea is to disallow the proof.


In particular, the idea is that referential transparency can be restored
if we only allow proofs that work for all evaluation orders, which is
equivalent to introducing non-determinism. In other words, we are only
allowed to prove

foo1  = return "1"  or  return "2"
foo2  = return "1"  or  return "2"

Moreover, we can push the non-determinism into the IO type and pretend
that pure functions  A -> B  are semantically lifted to  Nondet A ->
Nondet B  with some kind of  fmap .


The same goes for  hGetContents : if you use it twice on the same
handle, you're only allowed to prove non-deterministic behavior, which
is not very useful if you want a deterministic program. But you are
allowed to prove deterministic results if you use it with appropriate
caution.


In other words, the language semantics guarantees less than GHC actually
does. In particular, the semantics only allows reasoning that is
independent of the evaluation order, and this means treating IO as
non-deterministic in certain cases.


Regards,
apfelmus

--
http://apfelmus.nfshost.com

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Haskell 2010: libraries

2009-07-16 Thread Heinrich Apfelmus
Simon Marlow wrote:
> Ian Lynagh wrote:
>> Simon Marlow wrote:
>>> But there's a solution: we could remove the "standard" modules from
>>> base, and have them only provided by haskell-std (since base will just
>>> be a re-exporting layer on top of base-internals, this will be easy to
>>> do).  Most packages will then have dependencies that look like
>>>
>>>build-depends: base-4.*, haskell-std-2010
>>
>> We'll probably end up with situations where one dependency of a package
>> needs haskell-std-2010, and another needs haskell-std-2011. I don't know
>> which impls support that at the moment.
> 
> That's the case with base-3/base-4 at the moment.  Is it a problem?

I think the issue raised is the diamond import problem: for instance,
the list type from  haskell-std-2010  being spuriously different
from the one in  haskell-std-2011 .

This would affect new programs based on the 2011 standard that want to
use older libraries based on the 2010 standard; the point being that the
latter are "intentionally" not updated to the newer standard.

Of course, that's just the base-3 / base-4  issue which can be solved;
it's just that it's not automatic but needs explicit work by
implementors every time there is a new library standard.


Regards,
apfelmus

--
http://apfelmus.nfshost.com

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Haskell 2010: libraries

2009-07-10 Thread Heinrich Apfelmus
Simon Marlow wrote:
> Heinrich Apfelmus wrote:
>>
>> If I understand that correctly, this would mean to simply include the
>> particular version of a library that happens to be the current one at
>> the report deadline. In other words, the report specifies that say
>> version 4.1.0.0 of the base library is the standard one for 2010.
>>
>> Since old library versions are archived on hackage, this looks like a
>> cheap and easy solution to me. It's more an embellishment of alternative
>> 1. than a genuine 3.
> 
> So, just to be clear, you're suggesting that we
> 
>   - remove the whole of the Library Report,
> 
>   - declare a list of packages and versions that we consider
> to be the standard libraries for Haskell 2010.

Yes.

> This would be a bold step, in that we would be effectively standardising
> a lot more libraries than the current language standard.  The base
> package is a fairly random bag of library modules, for instance.  It
> contains a lot of modules that are only implemented by GHC.  It contains
> backwards compatibility stuff (Control.OldException), and stuff that
> doesn't really belong (Data.HashTable).  Perhaps we could explicitly
> list the modules that the standard requires.

Oh, that sounds bolder than I expected. Yes, I agree that we
should exclude modules that don't really belong; this should be cheap to
implement.

> On the other hand, this would be a useful step, in that it gives users a
> wide base of libraries to rely on.  And it's cheap to implement in the
> report.
> 
> Any other thoughts?

The way I imagine it is that the libraries thus standardized will *not*
be the libraries that most people are going to use; the latest versions
of the  base  library or the Haskell Platform will define a current set
of "standard" libraries.

Rather, I imagine the libraries standardized in the report to be a
reference for writing code that does not need to be updated when  base
or the HP change. Put differently, if I put the

  {-# LANGUAGE Haskell'2010 #-}

flag into my source code, then I'm assured that it will compile for all
eternity because my favorite compiler is going to use the  base  library
specified in the report instead of the newest  base  library available
on hackage. This requires compiler support.

In other words, this is option 1. embellished with the cheapest way of
blessing a bunch of libraries for the purpose of backward compatibility.


This may not be the best solution to the dilemma of backward
compatibility versus library change, but I think it reflects current
practice. I
can write strict H98 if I want to, but most of the time I'm just going
to use the latest  base  anyway.



On a side note, if Haskell 2010 gets a library report, then I think this
should be in the form of a simple package on hackage, named something
like "haskell2010-libraries".


Regards,
apfelmus

--
http://apfelmus.nfshost.com

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Haskell 2010: libraries

2009-07-08 Thread Heinrich Apfelmus
Bulat Ziganshin wrote:
> Simon Marlow wrote:
> 
>>   3. Update the libraries to match what we have at the moment.
>>  e.g. rename List to Data.List, and add the handful of
>>  functions that have since been added to Data.List.  One
>>  problem with this is that these modules are then tied to
>>  the language definition, and can't be changed through
>>  the usual library proposal process.
> 
> not necessarily. we already apply versioning to these libs, it may be
> made official in Report too. i.e. Report defines libraries standard
> for year 2010 (like it defines language standard for only one year),
> while we continue to improve libraries that will eventually become
> version standard for year 2011 (or higher)

If I understand that correctly, this would mean to simply include the
particular version of a library that happens to be the current one at
the report deadline. In other words, the report specifies that say
version 4.1.0.0 of the base library is the standard one for 2010.

Since old library versions are archived on hackage, this looks like a
cheap and easy solution to me. It's more an embellishment of alternative
1. than a genuine 3.


Regards,
apfelmus

--
http://apfelmus.nfshost.com

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Suggestion regarding (.) and map

2008-04-25 Thread apfelmus

Dan Doel wrote:
If you do want to generalize (.), you have to decide whether you 
want to generalize it as composition of arrows, or as functor application. 
The former isn't a special case of the latter (with the current Functor, at 
least).


By annotating functors with the category they operate on, you can 
reconcile both seemingly different generalizations


   class Category (~>) => Functor (~>) f where
       (.) :: (a ~> b) -> (f a -> f b)

   -- functor application
   instance Functor (->) [] where
       (.) = map

   -- arrow composition
   instance Category (~>) => Functor (~>) (d ~>) where
       (.) = (<<<)
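
Given these instances (a sketch: it needs MultiParamTypeClasses and
type operators, and hides the Prelude's (.) and Functor), both
readings of a single (.) typecheck:

   negate . [1,2,3]   -- functor application: [-1,-2,-3]
   negate . (+1)      -- arrow composition in (->): \x -> negate (x + 1)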


Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Meta-point: backward compatibility

2008-04-24 Thread apfelmus

Chris Smith wrote:
I'm definitely not arguing for this ($) 
associativity change, for example, and my objection is the backward 
compatibility.  But ultimately, it's more like a combination of 
incompatibility and the lack of a really compelling story on why it 
should be one way or the other.  I have a hard time calling this a "fix"; 
it's more like a change of personal taste.


The $ problem is an interesting one, for it has the following properties:

 1) Someone who learns Haskell for the first time will not complain
about either associativity. Whether it's left or right associative
is rather unimportant, it is how it is and either way is fine.

 2) But once the decision has been made, it's nigh impossible to change
it, simply because everybody has relied on one particular
associativity of $ all the time.

 3) Assuming that this associativity is a matter of taste, it only
sounds fair to experiment and try different tastes. So, Cale has a
Prelude with left associative $ and Lennart has a different personal
Prelude as well and I'm going too taste a sip of either one and
brew my own functional coffee. But in that situation, the defining
property of a standard, namely that everybody uses the same $ , is
gone, everybody would have to check which one is used in which
module.

It's like being forced to eat chicken only because chicken lay eggs. 
Some may like it, some may not, but the eggs will hatch new chicken who 
lay new eggs for all eternity :)



Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: patch applied (haskell-prime-status): add "Make $ left associative, like application"

2008-04-23 Thread apfelmus

Cale Gibbard wrote:

apfelmus wrote:


Unfortunately, the identity functor currently can't be overloaded,
although I think it would be unambiguous.



Unfortunately, it would be quite ambiguous -- the identity functor
overlaps with basically any other. Consider the case:

reverse . [[1,2,3],[4,5]]

which if (.) is fmap would normally mean [[3,2,1],[5,4]], but if the
identity functor is used instead would mean [[4,5],[1,2,3]].


Whoops, what was I thinking here? I somehow thought that the argument of 
the function applied ( reverse  in this case) would fix the functor. But 
this only works if the function is monomorphic:


  (reverse :: [[Int]] -> [[Int]]) .  [[1,2,3],[4,5]]
  (reverse :: [Int] -> [Int]) .  [[1,2,3],[4,5]]

not if it's polymorphic.


Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: patch applied (haskell-prime-status): add "Make $ left associative, like application"

2008-04-23 Thread apfelmus

Dan Doel wrote:

3) Left associative ($) is consistent with left associative ($!).

   (f $! x) y z
   ((f $! x) $! y) $! z

Left associative, these are:

   f $! x $ y $ z
   f $! x $! y $! z


Nice! Subconsciously, the fact that ($!) is currently not left 
associative has always bitten me.



In the light of Cale's plan to make (.) equivalent to  fmap , there is 
also the option to redefine ($) to mean  fmap  . This would eliminate 
the need for a special <$> for applicative functors.


Note that setting (.) or ($) = fmap  subsumes function application, 
because we have


  fmap :: (a -> b) -> a -> b

for the /identity functor/. In other words, the current ($) and (.) are 
just special cases of the general  fmap  . Unfortunately, the identity 
functor currently can't be overloaded, although I think it would be 
unambiguous.
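
For reference, the identity functor is available with an explicit
wrapper (essentially what later base versions ship as
Data.Functor.Identity); the point above is that the *bare* identity,
without the newtype, cannot be overloaded:

  newtype Identity a = Identity { runIdentity :: a }

  instance Functor Identity where
      fmap f (Identity x) = Identity (f x)

  -- fmap is function application modulo wrapping:
  -- runIdentity (fmap (+1) (Identity 2))  ==  3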



Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Status of Haskell Prime Language definition

2007-10-18 Thread apfelmus

Iavor Diatchki wrote:

http://hackage.haskell.org/trac/haskell-prime/wiki/FunctionalDependencies#Lossofconfluence


There is nothing about the system being unsound there.  Furthermore, I
am unclear about the problem described by the link...  The two sets of
predicates are logically equivalent (have the same set of ground
instances), here is a full derivation:

(B a b, C [a] c d)
using the instance
(B a b, C [a] c d, C [a] b Bool)
using the FD rule
(B a b, C [a] c d, C [a] b Bool, b = c)
using b = c
(B a b, C [a] c d, C [a] b Bool, b = c, C [a] b d)
omitting unnecessary predicates and rearranging:
(b = c, B a b, C [a] b d)

The derivation in the other direction is much simpler:
(b = c, B a b, C [a] b d)
using b = c
(b = c, B a b, C [a] b d, C [a] c d)
omitting unnecessary predicates
(B a b, C [a] c d)


You're right, I think I'm mixing up soundness with completeness and 
termination. My thought was that not explicitly mentioning the b=c 
constraint could lead to the type inference silently dropping this fact 
and letting an inconsistent set of instance declarations "go through" 
without noticing. But that would only be important in a setting where 
inconsistent instances are not reported early via the consistency 
condition but late when actually constructing terms. The consistency 
condition should be enough for soundness (no inconsistent axioms 
accepted), but I haven't delved into FDs deeply enough to know such things for sure.
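
For reference, a set of declarations that gives rise to the predicates
above (a reconstruction from the derivation, not copied from the wiki
page, which has the authoritative example):

  class B a b | a -> b
  class C a b c | a -> b
  instance B a b => C [a] b Bool

Under the functional dependency on C, the pair  C [a] c d  and
C [a] b Bool  indeed forces  b = c .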


Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Status of Haskell Prime Language definition

2007-10-16 Thread apfelmus

Iavor Diatchki wrote:

apfelmus wrote:

fundeps are too tricky to get powerful and sound at the same time.


I am not aware of any soundness problems related to functional
dependencies---could you give an example?


http://hackage.haskell.org/trac/haskell-prime/wiki/FunctionalDependencies#Lossofconfluence

But I should have said "sound, complete and decidable" instead :)

Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Status of Haskell Prime Language definition

2007-10-16 Thread apfelmus

Robert Will wrote:

Could someone please summarize the current status and planned time
line for Haskell'?


John Launchbury wrote:
Up to now, the Haskell' effort has been mostly about exploring the 
possibilities, to find out what could be in Haskell', and to scope out 
what it might mean. We've now reached the stage where we want to do the 
opposite, namely trying to pin down what we definitely want to have in 
the standard, and what it should look like in detail.


There's still a major technical obstacle, namely  functional 
dependencies  vs  associated type synonyms . Some functionality for 
programming in the type system is needed for Haskell' but fundeps are 
too tricky to get powerful and sound at the same time. The problem with 
their promising alternative of associated type synonyms is that they're 
very young with their first official release being the upcoming GHC 6.8 
. So, they have to stand some test of time before Haskell' can pick one 
of the two (probably the latter).


Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Make it possible to evaluate monadic actions when assigning record fields

2007-07-12 Thread apfelmus
apfelmus wrote:
> In the end, I think that applicatively used monads are the wrong
> abstraction.

Simon Peyton-Jones wrote:
> Can you be more explicit?  Monadic code is often over-linearised.
> I want to generate fresh names, say, and suddenly I have to name
> sub-expressions. Not all sub-expressions, just the effectful ones.

Neil Mitchell wrote:
> The monad in question simply supplies free variables, so could be
> applied in any order.

I see, the dreaded name-supply problem. Well, it just seems that monads
are not quite the right abstraction for that one, right? (Even though
monads make for a good implementation.) In other words, my opinion is
that it's not the monadic code that is over-linearized but the code that
is over-monadized.

The main property of a "monad" for name-supply is of course

 f >> g  =  g >> f

modulo alpha-conversion. Although we have to specify an order, it's
completely immaterial. There _has_ to be a better abstraction than
"monad" to capture this!

SPJ:
> It's a pain to define liftM_yes_no_yes which takes an effectful
> argument in first and third position, and a non-effectful one as
> the second arg:
>
> liftM_yes_no_yes :: (a->b->c->m d)
> -> m a -> b -> m c -> m d
> 
> What a pain.  So we have either
> 
> do { ...; va <- a; vc <- c; f va b vc; ... }
> 
> or
> do { ...; liftM_yes_no_yes f a b c; ...}
> 
> or, with some syntactic sugar...
> 
> do { ...; f $(a) b $(c); ...}
>
> The liftM solution is even more awkward if I want
>
>   f (g $(a)) b c
>
> for example.

(the last one is already a typo, I guess you mean  f $(g $(a)) b c)

Neil:
>   -- helpers, ' is yes, _ is no
>
> coreLet__  x y = f $ CoreLet  x y
> coreLet_'  x y = f . CoreLet  x =<< y
>
> coreLet x y = f $ CoreLet x y
>
> f (CoreApp (CoreLet bind xs) ys) = coreLet bind $(coreApp xs ys)
>

Uhm, but you guys know that while (m a -> a) requires the proposed
syntactic sugar, (a -> m a) is easy?

  r = return

  elevateM  f x1 = join $ liftM f x1
  elevateM3 f x1 x2 x3 = join $ liftM3 f x1 x2 x3

  do { ...; elevateM3 f a (r$ b) c; ...}
  elevateM3 f (elevateM g a) (r$ b) (r$ c)


  coreLet x y = liftM2 CoreLet x y >>= f
  g (CoreApp (CoreLet bind xs) ys) = coreLet (r$ bind) (coreApp xs ys)

In other words, you can avoid creating special yes_no_yes wrappers by
creating a yes_yes_yes wrapper and turning a no into a yes here and
there. No need for turning yes into no.

One could even use left-associative infix operators

  ($@)  :: (a -> b) -> a -> b
  ($@@) :: Monad m => (m a -> b) -> a -> b
  ($@)  = id
  ($@@) f = f . return

and currying

  elevateM3 f $@@ (elevateM g $@@ a) $@ b $@ c
  g (CoreApp (CoreLet bind xs) ys) = coreLet $@ bind $@@ coreApp xs ys

The intention is that a (mixed!) sequence of operators should parse as

  f $@ x1 $@@ x2 $@ x3 = ((f $@ x1) $@@ x2) $@ x3


Leaving such games aside, the fact that yes_yes_yes-wrappers subsume
the others is a hint that types like

  NameSupply Expr -> NameSupply Expr -> NameSupply Expr

are fundamental. In other words, the right type for expressions is
probably not  Expr  but  NameSupply Expr  with the interpretation that
the latter represents expressions with "holes" where the concrete names
for variables are filled in. The crucial point is that holes may be
_shared_, i.e. supplying free variable names will fill several holes
with the same name. Put differently, the question is: how to share names
without giving concrete names too early? I think it's exactly the same
question as

  How to make sharing observable?

This is a problem that haunts many people and probably every
DSL-embedder (Lava for Hardware, Pan for Images, Henning Thielemann's
work on sound synthesis, Frisby for parser combinators). In a sense,
writing a Haskell compiler is similar to embedding a DSL.

I have no practical experience with the name-supply problem. So, the
first question is: can the name-supply problem indeed be solved by some
form of observable sharing? Having a concrete toy-language showing
common patterns of the name-supply problem would be ideal for that.

The second task would be to solve the observable sharing problem, _that_
would require some syntactic sugar. Currently, one can use MonadFix to
"solve" it. Let's take parser combinators as an example. The
left-recursive grammar

  digit   -> 0 | .. | 9
  number  -> number' digit
  number' -> ε | number

can be represented by something like

  mdo
      digit   <- newRule $ foldr1 (|||) [0...9]
      number  <- newRule $ number' &&& digit
      number' <- newRule $ empty   ||| number

This way, we can observe the sharing and break the left recursion. But
of co

Re: Make it possible to evaluate monadic actions when assigning record fields

2007-07-12 Thread apfelmus
Adde wrote:
> apfelmus wrote:
>> In any case, I'm *strongly against* further syntactic sugar for
>> monads, including #1518. The more tiresome monads are, the more
>> incentive you have to avoid them.
>
> Monads are a part of Haskell. The more tiresome monads are to use, the
> more tiresome Haskell is to use. I suggest we leave the decision of
> where and when to use them to each individual user of the language.

Well, the monads will merely remain as "tiresome" as they are now. Also,
the most intriguing fact about monads (or rather about Haskell) is that
they are not a (built-in) part of the language, they are "just" a type
class. Sure, there is do-notation, but >>= is not much clumsier than that.

In the end, I think that applicatively used monads are the wrong
abstraction. For occasional use, liftM2 and `ap` often suffice. If the
applicative style becomes prevalent, then Applicative Functors are
likely to be the conceptually better choice. This is especially true for
MonadReader. Arithmetic expressions are a case for liftM, too. And an
instance (Monad m, Num a) => Num (m a)  allows one to keep infix (+) and (*).
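
Spelled out, the mentioned instance is a one-liner per method (a
sketch; it needs FlexibleInstances, and whether it is wise is another
matter):

  {-# LANGUAGE FlexibleInstances #-}

  import Control.Monad (liftM, liftM2)

  instance (Monad m, Num a) => Num (m a) where
      (+)         = liftM2 (+)
      (*)         = liftM2 (*)
      (-)         = liftM2 (-)
      negate      = liftM negate
      abs         = liftM abs
      signum      = liftM signum
      fromInteger = return . fromInteger

  -- ghci> Just 2 + Just 3
  -- Just 5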

Put differently, I don't see a compelling use-case for the proposed
syntax extension. But I've seen many misused monads.

Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Make it possible to evaluate monadic actions when assigning record fields

2007-07-11 Thread apfelmus
Wouter Swierstra wrote:
> 
> On 11 Jul 2007, at 08:38, Simon Peyton-Jones wrote:
> 
>> Another alternative (which I got from Greg Morrisett) that I'm toying
>> with is this.  It's tiresome to write
>>
>> do { x <- <expr1>
>>; y <- <expr2>
>>; f x y }
>>
>> In ML I'd write simply
>>
>> f <expr1> <expr2>
> 
> Using Control.Applicative you could already write:
> 
> f <$> x <*> y

No, since f is not a pure function, it's f :: x -> y -> m c. The correct
form would be

  join $ f <$> x <*> y

(Why doesn't haddock document infix precedences?) But maybe some
type-class hackery can be used to eliminate the join.

In any case, I'm *strongly against* further syntactic sugar for monads,
including #1518. The more tiresome monads are, the more incentive you
have to avoid them.

Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: inits is too strict

2007-06-14 Thread apfelmus
> apfelmus:
>> Where can I report a bug report for the report? ;)

Stefan O'Rear wrote:
> http://haskell.org/haskellwiki/Language_and_library_specification
> says:
>
> The report still has minor bugs. There are tracked at the Haskell 98
> bugs page. Report any new bugs to Malcolm Wallace.

Donald Bruce Stewart wrote:
> Well, right here. There are other strictness issues either differing
> from the spec, or not clearly defined (foldl', for example). 

Ah. Actually, I thought that there's something like a bug-tracker for
the standardized libraries. I wouldn't change inits in the existing H98
standard, it's not really worth the hassle. But I'd change it to the
lazy version for Haskell'.

> A useful tool for checking these is the StrictCheck/ChasingBottoms
> library, QuickCheck for partial values, by the way.

Oh, really useful :)


Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


inits is too strict

2007-06-13 Thread apfelmus
Hello,

  inits [] = [[]]
  inits (x:xs) = [[]] ++ map (x:) (inits xs)

as specified in Data.List has a "semantic bug", namely it's too strict:

  inits (1:_|_) = []:_|_

as opposed to the expected

  inits (1:_|_) = []:[1]:_|_

A correct version would be

  inits xs = [] : case xs of
      []     -> []
      (x:xs) -> map (x:) (inits xs)
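
The difference is observable with a partial list:

  take 2 (inits (1:undefined))
  -- lazy version:    [[],[1]]
  -- report version:  diverges when the second element is demanded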

The Haskell report specifies how  inits  has to behave, so this is a
problem in the report, not in a concrete implementation. Where can I
report a bug report for the report? ;)

Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: strict bits of datatypes

2007-03-16 Thread apfelmus
Ross Paterson wrote:
> On Fri, Mar 16, 2007 at 05:40:17PM +0100, apfelmus wrote:
>> the translation loops
>> as we could (should?) apply
>>
>> FinCons
>>  => \x y -> FinCons x $! y
>>  => \x y -> (\x' y' -> FinCons x' $! y') x $! y
>>  => ...
>>
>> ad infinitum.
> 
> Yes, perhaps that ought to be fixed. But even so, this clearly implies that
> 
>   FinCons 3 _|_ = _|_
> 
> and thus that q is _|_ and nhc98/yhc have a bug.

Yes, I agree completely. I should have separated the observation that
the rewrite rule for the translation of strict constructors loops from
the business with q.

Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: strict bits of datatypes

2007-03-16 Thread apfelmus
Jón Fairbairn wrote:
> [EMAIL PROTECTED] writes:
> 
>> Besides, having
>>
>>   let q = FinCons 3 q in q
>>
>> not being _|_ crucially depends on memoization. 
> 
> Does it?

Sorry for having introduced an extra paragraph, I meant that q =/= _|_
under the new WHNF-rule would depend on memoization. At the memory
location of q, hereby marked with *q, evaluation would yield

  *q: q
 =>
  *q: FinCons 3 q

Now, this can be considered "ok" according to the rule because the data
at the location is WHNF and the second argument of FinCons is WHNF as
well because we just evaluated q to WHNF.

By introducing an extra parameter, the memoization is gone and
evaluation will yield

  q ()
 =>
  FinCons 3 (q ())

The point is that the second argument to FinCons is not WHNF, so we have
to evaluate that further in order to generate only values that conform
to the new WHNF-rule. Of course, this evaluation will diverge now.

With the above, I want to show that the proposed new WHNF-rule gives
non-_|_ values in very special cases only. I don't think that these are
worth it.


Regards,
apfelmus

PS: Your derivations are fine in the case of a non-strict FinCons. But
the point is to make it strict.

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: strict bits of datatypes

2007-03-16 Thread apfelmus
Ian Lynagh wrote:
> Here I will just quote what Malcolm said in his original message:
> 
> The definition of seq is
> seq _|_ b = _|_
> seq  a  b = b, if a/= _|_
> 
> In the circular expression
> let q = FinCons 3 q in q
> it is clear that the second component of the FinCons constructor is not
> _|_ (it has at least a FinCons constructor), and therefore it does not
> matter what its full unfolding is.

Well, in a sense, it's exactly the defining property of strict
constructors that they are not automatically different from _|_.
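
(For reference: the datatype under discussion didn't make it into this
excerpt; presumably it is something like

   data FinList a = FinCons a !(FinList a)

with a strictness annotation on the recursive field.)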

The translation

> q = FinCons 3 q
> === (by Haskell 98 report 4.2.1/Strictness Flags/Translation)
> q = (FinCons $ 3) $! q

is rather subtle: the first FinCons is a strict constructor whereas the
second is "the real constructor". In other words, the translation loops
as we could (should?) apply

FinCons
 => \x y -> FinCons x $! y
 => \x y -> (\x' y' -> FinCons x' $! y') x $! y
 => ...

ad infinitum.

> and in his recent e-mail to me:
> 
> Yes, I still think this is a reasonable interpretation of the Report.  I
> would phrase it as "After evaluating the constructor expression to WHNF,
any strict fields contained in it are also guaranteed to be in WHNF."

Referring to WHNF would break the report's preference of not committing
to a particular evaluation strategy. That's already a good reason to
stick with FinCons 3 _|_ = _|_.

Besides, having

  let q = FinCons 3 q in q

not being _|_ crucially depends on memoization. Even with the
characterization by WHNF,

  let q x = FinCons 3 (q x) in q ()

is _|_.


Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: "Higher order syntactic sugar"

2006-12-17 Thread apfelmus
Claus Reinke wrote:
> ooohh.. when I saw the subject, I fully expected a worked out proposal for
> extensible syntax in Haskell, just in time for Christmas. well, maybe
> next year!-)

I'm sorry :( But this is because Santa Claus is not yet interested
in Haskell: he swears by C-- for writing the high-performance "real
world" applications used in his Christmas gift delivery company. ;)

>> I mean that one rarely hides a Just constructor like in
> 
> oh? getting rid of nested (case x of {Just this ->..; Nothing -> that})
> is a very good argument in favour of do-notation for Maybe, and I find that
> very natural (for some definition of nature;-).

Ah, I meant it in the sense that Just and Nothing are very special
constructors but that this behavior is wanted for other constructors too:

  data Color a b = Red a | Green a a | Blue b

  instance MonadPlus (Color a) where
...

But now, we are tied again to a specific set of constructors. One might
want to have fall-back semantics for any constructor at hand and that's
what can be achieved with the "lifted let" (<- return, <<-, <--, <==,
let', ...):

  (Red r <-- x, Left y <-- r, ...  ) -- fall-back if anything fails
  `mplus` (Green g g' <-- x, Just k <-- g, ...)

If one wants to hide these things with <- like in the case of Maybe, one
would have to project into Maybe:

   fromRed   (Red r)      = Just r
   fromRed   _            = Nothing
   fromBlue  (Blue b)     = Just b
   fromBlue  _            = Nothing
   fromGreen (Green g g') = Just (g,g')
   fromGreen _            = Nothing
   fromLeft  (Left x)     = Just x
   fromLeft  _            = Nothing

   (do
       r <- fromRed x
       y <- fromLeft r ...)
   `mplus`
   (do
       (g,g') <- fromGreen x
       k      <- g ...)

In this sense, the "lifted let" is more natural for fall-back because it
treats all constructors as equal. Maybe just provides the semantics and
is to be fused away. So I think that while do-notation is more natural
than case-matching for Maybe, the most natural notation for the
fall-back semantics is pattern guards.

Likewise, list comprehension is the most natural style for (MonadPlus
[]). Here, one has normal <-, but boolean guards are sugared.

>> Some "higher order syntactic sugar" melting machine bringing all these
>> candies together would be very cool.
> 
> hooray for extensional syntax!-) syntax pre-transformation that would
> allow me to extend a Haskell parser in library code is something I'd
> really like to see for Haskell, possibly combined with error message
> post-transformation. together, they'd smooth over the main objections
> against embedded DSLs, or allow testing small extensions of Haskell.

Yes, that would be great. But I fear that this will result in dozens of
different "Haskell" incarnations, each more obscure than the last. And
it's completely unclear how different syntax alterations would
interoperate with each other.

> I have been wondering in the past why I do not use Template Haskell
> more, [...]but its main use seems to be program-dependent
> program generation, within the limits of Haskell syntax.

True. Compared to Template Haskell, a preprocessor allows syntactic
extensions but is weak at type correctness.


Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


"Higher order syntactic sugar"

2006-12-14 Thread apfelmus
> apfelmus suggested to use '<=' for this purpose, so that,
> wherever monadic generators
> are permitted
> 
>pattern <= expr  ~~> pattern <- return expr

It was too late when I realized that <= is already used as "smaller than
or equal to" :)

Obviously, the difference between the pattern guard <- and the monadic
<- easily slips by. I think this has to do with the fact that
do-notation is not the natural style for MonadPlus Maybe; the natural
style is more like the current syntax of pattern guards. I mean that one
rarely hides a Just constructor like in

   do
   r <- lookup x map

because returning Maybe is a very special case; there are many other
constructors to match on where one wants fall-back semantics. Of course,
every sum type can be projected to Maybe X =~= X + 1 but this involves
boilerplate. In a sense, do-notation is just not appropriate for
MonadPlus Maybe.

It is somewhat unfortunate that while arrows, monads and pattern guards
(= MonadPlus Maybe) could share the same syntax, it is not advisable to
do so because this introduces quite annoying boilerplate. The most
general syntax is too much for the special case. But there is something
more canonical than completely disjoint syntax: in a sense, Claus'
suggestions are about making the syntax for the special case a *subset*
of the syntax for the more general one.

The "partial order of syntax inclusion" should look something like

   Arrows   MonadPlus
        \   /
        Monad

Even though arrows are more general than monads (fewer theorems hold),
they require more syntax. On the other hand, MonadPlus provides more
than a monad, so it needs a new syntax, too.

Remember that these are not the only computation abstractions. Syntactic
sugar for pseudo-let-declarations (akin to MonadFix but order
independent, can be embedded using observable sharing) is advisable,
too. Only applicative functors behave very nicely and fit into current
Haskell syntax (maybe that's the reason why they have been discovered
only lately? :). In a sense, even ordinary Haskell (= pure functions) is
only "syntactic sugar". Some "higher order syntactic sugar" melting
machine bringing all these candies together would be very cool.


Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Are pattern guards obsolete?

2006-12-13 Thread apfelmus
Iavor Diatchki wrote:
> I am not clear why you think the current notation is confusing...
> Could you give a concrete example?  I am thinking of something along
> the lines:  based on how "<-" works in list comprehensions and the do
> notation, I would expect that pattern guards do XXX but instead, they
> confusingly do YYY.  I think that this will help us keep the
> discussion concrete.

Pattern guards basically are a special-case syntactic sugar for
(instance MonadPlus Maybe). The guard

foo m x
| empty m = bar
| Just r <- lookup x m, r == 'a' = foobar

directly translates to

foo m x = fromJust $
   (do { guard (empty m); return bar })
 `mplus`
   (do { Just r <- return (lookup x m); guard (r == 'a')
       ; return foobar })

The point is that the pattern guard notation

Just r <- lookup x m

does *not* translate to itself but to

Just r <- return (lookup x m)

in the monad. The <- in the pattern guard is a simple let binding. There
is no monadic action on the right hand side of <- in the pattern guard.
Here, things get even more confused because (lookup x m) is itself a
Maybe type, so the best translation into (MonadPlus Maybe) actually would be

r <- lookup x m


Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: String literals

2006-11-13 Thread apfelmus
>>>
>>> what about pattern matching?
>>
>> Yes, pattern matching is the issue that occurs to me too.
>> While string literals :: ByteString would be nice (and other magic
>> encoded in string literals,  I guess), what is the story for pattern
>> matching on strings based on non-inductive types like arrays?
>
> Pattern matching would work like pattern matching with numeric
> literals does. You'll have to use equality comparison.  To pattern
> match the string type would have to be in Eq as well.

Mh, that's a showcase for Views. Something like

view IsString a => String of a where ...

That is, one has an already existing type that serves as a view for
another one. Perhaps, Views should be more like class declarations with
asssociated constructors

class IsString a where
[]  :: a
(:) :: Char -> a -> a

Very similar to the new (G)ADT syntax and some kind of polymorphic
variants with "virtual" constructors, isn't it?


Anyway, the pattern guard approach would be to *not* allow string
literals in pattern matches:

patty bs
   | "pattern string" == bs = flip id id . flip id


I think it's very unfair not to have general Views when both
polymorphic integer literals and string literals are now to be allowed
in pattern matching.


Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Fractional/negative fixity?

2006-11-08 Thread apfelmus
Lennart Augustsson wrote:
> 
> On Nov 7, 2006, at 11:47 ,
> [EMAIL PROTECTED] wrote:
> 
>> Henning Thielemann wrote:
>>> On Tue, 7 Nov 2006, Simon Marlow wrote:
>>>
>>>> I'd support fractional and negative fixity.  It's a simple change to
>>>> make, but we also have to adopt
> [...]
>>
>> I think that computable real fixity levels are useful, too. A further
>> step to complex numbers is not advised because those cannot be ordered.
> 
> But ordering of the computable reals is not computable.  So it could
> cause the compiler to loop during parsing. :)

Actually, that's one of the use cases ;)

Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Fractional/negative fixity?

2006-11-07 Thread apfelmus
Henning Thielemann wrote:
> On Tue, 7 Nov 2006, Simon Marlow wrote:
> 
>> I'd support fractional and negative fixity.  It's a simple change to
>> make, but we also have to adopt
>>
>> http://hackage.haskell.org/cgi-bin/haskell-prime/trac.cgi/wiki/FixityResolution
>>
>> I've added the proposal to the end of that page.  In fact, the page
>> already mentioned that we could generalise fixity levels, but it didn't
>> mention fractional or negative values being allowed.
> 
> Maybe that page could also mention earlier proposals and the solutions
> without precedence numbers. I prefer the non-numeric approach with rules
> like "(<) binds more tightly than (&&)", because it says what is intended
> and it allows to make things unrelated that are unrelated, e.g. infix
> operators from different libraries. Consequently a precedence relation to
> general infix operators like ($) and (.) had be defined in each library.

I think that computable real fixity levels are useful, too. A further
step to complex numbers is not advised because those cannot be ordered.

But to be serious, the non-numeric rule based approach yields
lattice-valued fixity levels. If we use a CPO, we gain ultimate
expressiveness by being able to express fixity levels as fixed points of
continuous functionals!

Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Replacing and improving pattern guards with PMC syntax

2006-10-03 Thread apfelmus
> The
> problem is not that there is syntactic sugar for pattern matching,
> but that this isn't sugar coating at all - there is functionality hidden
> in there that cannot be provided by the remainder of the language.
> 
> In other words, pattern matching and associated "sugar" become
> part of Haskell's core, which thus becomes more complex,
> without offering sufficient compensation in terms of expressiveness.

I agree. The pattern matching problem is best solved by supplying sugar
which either is compositional or fundamental.

The compositional structure behind pattern guards is of course
(MonadPlus Maybe). So an idea could be to give the plus in MonadPlus
suitable syntactic sugar (which it currently lacks) and get pattern
guards for free. This can be too much, but at least there might be some
kind of sugar for the specific (MonadPlus Maybe) which actually yields
pattern guards.

I think of sugar along the lines of the following example

f (Right (Right p)) = p
f (Left p)  = p
f p = p

and its sugared version

f p = fromJust $
  | Right q <= p
      | Right r <= q  = r
  | Left q <= p   = q
  |               = p

where the nested pattern is split in two parts for the purpose of
demonstration. I don't know whether this can be parsed, but it's
intended to be parenthesized as follows:

{| {Right q <= p; {| { Right r <= q; = r;}}; };
{| {Left q <= p; = q;}};
{| {= p;}};

The intention is that | behaves like do with the extra feature that
adjacent | are collected together by `mplus`. So the desugaring of a
list of | statements is like

data | a = | a
desugarBar :: [| (Maybe a)] -> Maybe a
desugarBar xs = foldr1 mplus [expr | {| expr} <- xs ]

Further,

   pat <= expr

is equivalent to

   pat <- return (expr)

and that's why we introduce <= as distinct from <-. Note that <= is not
equivalent to {let pat = expr;} and this is actually the whole point of
the story.

The {= p;} should of course desugar to {return p;} and can somehow end a
| scope. It might be difficult to parse but looks much better than return p.


Inside the |, things are like in do notation. This means that the
delimiter is (;) and not (,) and we have full (<-) access to monadic
actions of type (Maybe a):

| Right q <= p; Right r <= q   = r
<==>
do
    Right q <- return p
    Right r <- return q
    return r

| val1 <- lookup key xs; val2 <- lookup key2 xs; = val1+val2
<==>
do
    val1 <- lookup key xs
    val2 <- lookup key2 xs
    return (val1 + val2)


It's possible to nest | as we all know it from do

| Right q <= p;
{| Left r <= p   = r
 | Right r <= p  = r
}

with curly braces added only for clarity. Layout should eliminate them.
Note how this works nicely with the fact the the last statement in do
notation implicitly determines the returned value.

Another thing to consider are boolean guards which could be
automatically enclosed by a corresponding (guard):

| Right q <= p; p > 5; = p-5
<==>
do
    Right q <- return p
    guard (p > 5)
    return (p-5)


One last thing is to eliminate fromJust:

f x
| (interesting things here)

should be syntactic sugar for

f x = fromJust $
| (interesting things here)



Regards,
apfelmus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime