Re: Records in Haskell

2012-01-08 Thread Gábor Lehel
2012/1/8 Greg Weber g...@gregweber.info:


 2012/1/8 Gábor Lehel illiss...@gmail.com

 Thank you. I have a few questions/comments.



 The module/record ambiguity is dealt with in Frege by preferring
 modules and requiring a module prefix for the record if there is
 ambiguity.

 I think I see why they do it this way (otherwise you can't refer to a
 module if a record by the same name is in scope), but on the other
 hand it would seem intuitive to me to choose the more specific thing,
 and a record feels more specific than a module. Maybe you could go
 that way and just not give your qualified imports the same name as a
 record? (Unqualified imports are in practice going to be hierarchical,
 and no one's in the habit of typing those out to disambiguate things,
 so I don't think it really matters if qualified records shadow them.)


 In the case where a Record has the same name as its containing module it
 would be more specific than a module, and preferring it makes sense. I think
 doing this inside the module makes sense, as one shouldn't need to refer to
 the containing module's name. We should think more about the case where
 module & records are imported.



 Expressions of the form x.n: first infer the type of x. If this is
 just an unbound type variable (i.e. the type is unknown yet), then
 check if n is an overloaded name (i.e. a class operation). [...] Under
 no circumstances, however, will the notation x.n contribute in any way
 in inferring the type of x, except for the case when n is a class
 operation, where an appropriate class constraint is generated.

 Is this just a simple translation from x.n to n x? What's the
 rationale for allowing the x.n syntax for, in addition to record
 fields, class methods specifically, but no other functions?


 It is a simple translation from x.n to T.n x
 The key point being the function is only accessible through the record's
 namespace.
 The dot is only being used to tap into a namespace, and is not available for
 general function application.

I think my question and your answer are walking past each other here.
Let me rephrase. The wiki page implies that in addition to using the
dot to tap into a namespace, you can also use it for general function
application in the specific case where the function is a class method
(appropriate class constraint is generated etc etc). I don't
understand why. Or am I misunderstanding?






 Later on you write that the names of record fields are only accessible
 from the record's namespace and via record syntax, but not from the
 global scope. For Haskell I think it would make sense to reverse this
 decision. On the one hand, it would keep backwards compatibility; on
 the other hand, Haskell code is already written to avoid name clashes
 between record fields, so it wouldn't introduce new problems. Large
 gain, little pain. You could use the global-namespace function as you
 do now, at the risk of ambiguity, or you could use the new record
 syntax and avoid it. (If you were to also allow x.n syntax for
 arbitrary functions, this could lead to ambiguity again... you could
 solve it by preferring a record field belonging to the inferred type
 over a function if both are available, but (at least in my current
 state of ignorance) I would prefer to just not allow x.n for anything
 other than record fields.)


 Perhaps you can give some example code for what you have in mind - we do
 need to figure out the preferred technique for interacting with old-style
 records. Keep in mind that for new records the entire point is that they
 must be name-spaced. A module could certainly export top-level functions
 equivalent to how records work now (we could have a helper that generates
 those functions).

Let's say you have a record.

data Record = Record { field :: String }

In existing Haskell, you refer to the accessor function as 'field' and
to the contents of the field as 'field r', where 'r' is a value of
type Record. With your proposal, you refer to the accessor function as
'Record.field' and to the contents of the field as either
'Record.field r' or 'r.field'. The point is that I see no conflict or
drawback in allowing all of these at the same time. Writing 'field' or
'field r' would work exactly as it already does, and be ambiguous if
there is more than one record field with the same name in scope. In
practice, existing code is already written to avoid this ambiguity so
it would continue to work. Or you could write 'Record.field r' or
'r.field', which would work as the proposal describes and remove the
ambiguity, and work even in the presence of multiple record fields
with the same name in scope.
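For concreteness, here is how the three forms line up. The top-level accessor is today's Haskell and compiles as-is; the namespaced and dot forms are the proposal's syntax and appear only in comments:

```haskell
-- Today's Haskell: declaring a record generates a top-level accessor.
data Record = Record { field :: String }

r :: Record
r = Record { field = "hello" }

-- Current style: plain function application. Ambiguous if another
-- 'field' is in scope, which existing code already avoids.
current :: String
current = field r

-- Proposed styles (not valid GHC syntax today, for illustration only):
--   Record.field r   -- accessor reached through the record's namespace
--   r.field          -- dot notation, resolved from the inferred type of r

main :: IO ()
main = putStrLn current
```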

The point is that I see what you gain by allowing record fields to be
referred to in a namespaced way, but I don't see what you gain by not
allowing them to be referred to in a non-namespaced way. In theory you
wouldn't care because the non-namespaced way is inferior anyways, but
in practice because all existing Haskell code does it that 

Re: Records in Haskell

2012-01-08 Thread Gábor Lehel
2012/1/8 Gábor Lehel illiss...@gmail.com:
 2012/1/8 Greg Weber g...@gregweber.info:


 2012/1/8 Gábor Lehel illiss...@gmail.com

 Thank you. I have a few questions/comments.



 The module/record ambiguity is dealt with in Frege by preferring
 modules and requiring a module prefix for the record if there is
 ambiguity.

 I think I see why they do it this way (otherwise you can't refer to a
 module if a record by the same name is in scope), but on the other
 hand it would seem intuitive to me to choose the more specific thing,
 and a record feels more specific than a module. Maybe you could go
 that way and just not give your qualified imports the same name as a
 record? (Unqualified imports are in practice going to be hierarchical,
 and no one's in the habit of typing those out to disambiguate things,
 so I don't think it really matters if qualified records shadow them.)


 In the case where a Record has the same name as its containing module it
 would be more specific than a module, and preferring it makes sense. I think
 doing this inside the module makes sense, as one shouldn't need to refer to
 the containing module's name. We should think more about the case where
 module & records are imported.



 Expressions of the form x.n: first infer the type of x. If this is
 just an unbound type variable (i.e. the type is unknown yet), then
 check if n is an overloaded name (i.e. a class operation). [...] Under
 no circumstances, however, will the notation x.n contribute in any way
 in inferring the type of x, except for the case when n is a class
 operation, where an appropriate class constraint is generated.

 Is this just a simple translation from x.n to n x? What's the
 rationale for allowing the x.n syntax for, in addition to record
 fields, class methods specifically, but no other functions?


 It is a simple translation from x.n to T.n x
 The key point being the function is only accessible through the record's
 namespace.
 The dot is only being used to tap into a namespace, and is not available for
 general function application.

 I think my question and your answer are walking past each other here.
 Let me rephrase. The wiki page implies that in addition to using the
 dot to tap into a namespace, you can also use it for general function
 application in the specific case where the function is a class method
 (appropriate class constraint is generated etc etc). I don't
 understand why. Or am I misunderstanding?






 Later on you write that the names of record fields are only accessible
 from the record's namespace and via record syntax, but not from the
 global scope. For Haskell I think it would make sense to reverse this
 decision. On the one hand, it would keep backwards compatibility; on
 the other hand, Haskell code is already written to avoid name clashes
 between record fields, so it wouldn't introduce new problems. Large
 gain, little pain. You could use the global-namespace function as you
 do now, at the risk of ambiguity, or you could use the new record
 syntax and avoid it. (If you were to also allow x.n syntax for
 arbitrary functions, this could lead to ambiguity again... you could
 solve it by preferring a record field belonging to the inferred type
 over a function if both are available, but (at least in my current
 state of ignorance) I would prefer to just not allow x.n for anything
 other than record fields.)


 Perhaps you can give some example code for what you have in mind - we do
 need to figure out the preferred technique for interacting with old-style
 records. Keep in mind that for new records the entire point is that they
 must be name-spaced. A module could certainly export top-level functions
 equivalent to how records work now (we could have a helper that generates
 those functions).

 Let's say you have a record.

 data Record = Record { field :: String }

 In existing Haskell, you refer to the accessor function as 'field' and
 to the contents of the field as 'field r', where 'r' is a value of
 type Record. With your proposal, you refer to the accessor function as
 'Record.field' and to the contents of the field as either
 'Record.field r' or 'r.field'. The point is that I see no conflict or
 drawback in allowing all of these at the same time. Writing 'field' or
 'field r' would work exactly as it already does, and be ambiguous if
 there is more than one record field with the same name in scope. In
 practice, existing code is already written to avoid this ambiguity so
 it would continue to work. Or you could write 'Record.field r' or
 'r.field', which would work as the proposal describes and remove the
 ambiguity, and work even in the presence of multiple record fields
 with the same name in scope.

 The point is that I see what you gain by allowing record fields to be
 referred to in a namespaced way, but I don't see what you gain by not
 allowing them to be referred to in a non-namespaced way. In theory you
 wouldn't care because the non-namespaced way is inferior 

Re: Records in Haskell

2012-01-08 Thread Ingo Wechsung
2012/1/8 Gábor Lehel illiss...@gmail.com


 The second is that only the author of the datatype could put functions
 into its namespace; the 'data.foo' notation would only be available
 for functions written by the datatype's author, while for every other
 function you would have to use 'foo data'. I dislike this special
 treatment in OO languages and I dislike it here.


Please allow me to clarify as far as Frege is concerned.
In Frege, this is not so, because implementations of class functions in an
instance will be linked back to the instantiated type's name space. Hence
one could do the following:

module RExtension where

import original.M(R)    -- access the R record defined in module original.M

class Rextension1 r where
  firstNewFunction :: .
  secondNewFunction :: .

instance Rextension1 R where
 -- implementation for new functions

And now, in another module one could

import RExtension()  -- equivalent to qualified import in Haskell

and, voilà, the new functions are accessible (only) through R


-- 
Mit freundlichen Grüßen
Ingo
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-01-08 Thread Gábor Lehel
2012/1/8 Ingo Wechsung ingo.wechs...@googlemail.com:


 2012/1/8 Gábor Lehel illiss...@gmail.com


 The second is that only the author of the datatype could put functions
 into its namespace; the 'data.foo' notation would only be available
 for functions written by the datatype's author, while for every other
 function you would have to use 'foo data'. I dislike this special
 treatment in OO languages and I dislike it here.


 Please allow me to clarify as far as Frege is concerned.
 In Frege, this is not so, because implementations of class functions in an
 instance will be linked back to the instantiated type's name space. Hence
 one could do the following:

 module RExtension where

 import original.M(R)    -- access the R record defined in module original.M

 class Rextension1 r where
       firstNewFunction :: .
       secondNewFunction :: .

 instance Rextension1 R where
      -- implementation for new functions

 And now, in another module one could

 import RExtension()      -- equivalent to qualified import in Haskell

 and, voilà, the new functions are accessible (only) through R

Ah, I see. And that answers my other question as well about why you
would special case class methods like this. Thanks. I think I prefer
Disciple's approach of introducing a new keyword alongside 'class' to
distinguish 'virtual record fields' (which get put in the namespace)
from any old class methods (which don't). Otherwise the two ideas seem
very similar. (While at the same time I still dislike the
wrong-direction aspect of both.)

It strikes me that this kind of 'virtual record fields' (or
'projectors') thing is trying to tackle a very similar problem space
as views and view patterns do. Namely, a kind of implementation
hiding, eliding the difference between information which happens to
represented by actual physical data and information which is merely
calculated from that data. So you could in theory later change the
representation (which fields are stored and which are calculated)
without the client code noticing. There also seems to be an analogy
between view patterns and simple function-based virtual-record-fields
on the one hand, which are read-only, and full views and a theoretical
lens-based virtual-record-fields on the other hand, which you can also
use for update. (I don't see any way around having to write both the
access and update function manually though, unless someone figures out
how to invert functions in the general case). I don't have any
follow-on thoughts from this at the moment but it seemed interesting
to note.
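As a sketch of what lens-based virtual record fields could look like (a minimal get/set pair; the Lens type and the Temp example are made up here, not part of any proposal):

```haskell
-- A minimal lens: a getter paired with a setter. A client using it
-- cannot tell whether the field is stored or computed.
data Lens s a = Lens { view :: s -> a, update :: s -> a -> s }

-- The record physically stores Celsius only.
data Temp = Temp { celsius :: Double } deriving (Eq, Show)

-- 'fahrenheit' is a virtual field: computed on read, converted on write.
-- The representation could later switch to storing Fahrenheit without
-- any change to code that goes through the lens.
fahrenheit :: Lens Temp Double
fahrenheit = Lens
  { view   = \t   -> celsius t * 9 / 5 + 32
  , update = \t f -> t { celsius = (f - 32) * 5 / 9 }
  }

main :: IO ()
main = print (view fahrenheit (Temp 100))
```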



 --
 Mit freundlichen Grüßen
 Ingo



-- 
Work is punishment for failing to procrastinate effectively.



Re: ConstraintKinds and default associated empty constraints

2012-01-08 Thread Bas van Dijk
On 23 December 2011 17:44, Simon Peyton-Jones simo...@microsoft.com wrote:
 My attempt at forming a new understanding was driven by your example.

 class Functor f where
    type C f :: * -> Constraint
    type C f = ()

 sorry -- that was simply type incorrect.  () does not have kind * ->
 Constraint

So am I correct that the `class Empty a; instance Empty a` trick is
currently the only way to get default associated empty constraints?

Will this change in GHC-7.4.1? (For example by having an overloaded `()`)

The reason I ask is I would like to know if it's already feasible to
start proposing adding these associated constraints to Functor,
Applicative and Monad.
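For reference, the trick in question looks like this (the class name CFunctor is made up here to avoid clashing with the real Functor; this needs ConstraintKinds and friends):

```haskell
{-# LANGUAGE ConstraintKinds, TypeFamilies, FlexibleInstances #-}
import GHC.Exts (Constraint)

-- The trick: a methodless class that every type satisfies, usable as a
-- do-nothing default wherever something of kind * -> Constraint is needed.
class Empty a
instance Empty a

class CFunctor f where
  type C f :: * -> Constraint
  type C f = Empty              -- default: no constraint at all
  cfmap :: (C f a, C f b) => (a -> b) -> f a -> f b

-- Lists need no constraint, so the default applies.
instance CFunctor [] where
  cfmap = map

main :: IO ()
main = print (cfmap (+ 1) [1, 2, 3 :: Int])
```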

Cheers,

Bas



Re: Records in Haskell

2012-01-08 Thread Greg Weber
2012/1/8 Gábor Lehel illiss...@gmail.com

  
 
 
 
  Later on you write that the names of record fields are only accessible
  from the record's namespace and via record syntax, but not from the
  global scope. For Haskell I think it would make sense to reverse this
  decision. On the one hand, it would keep backwards compatibility; on
  the other hand, Haskell code is already written to avoid name clashes
  between record fields, so it wouldn't introduce new problems. Large
  gain, little pain. You could use the global-namespace function as you
  do now, at the risk of ambiguity, or you could use the new record
  syntax and avoid it. (If you were to also allow x.n syntax for
  arbitrary functions, this could lead to ambiguity again... you could
  solve it by preferring a record field belonging to the inferred type
  over a function if both are available, but (at least in my current
  state of ignorance) I would prefer to just not allow x.n for anything
  other than record fields.)
 
 
  Perhaps you can give some example code for what you have in mind - we do
  need to figure out the preferred technique for interacting with old-style
  records. Keep in mind that for new records the entire point is that they
  must be name-spaced. A module could certainly export top-level functions
  equivalent to how records work now (we could have a helper that generates
  those functions).

 Let's say you have a record.

 data Record = Record { field :: String }

 In existing Haskell, you refer to the accessor function as 'field' and
 to the contents of the field as 'field r', where 'r' is a value of
 type Record. With your proposal, you refer to the accessor function as
 'Record.field' and to the contents of the field as either
 'Record.field r' or 'r.field'. The point is that I see no conflict or
 drawback in allowing all of these at the same time. Writing 'field' or
 'field r' would work exactly as it already does, and be ambiguous if
 there is more than one record field with the same name in scope. In
 practice, existing code is already written to avoid this ambiguity so
 it would continue to work. Or you could write 'Record.field r' or
 'r.field', which would work as the proposal describes and remove the
 ambiguity, and work even in the presence of multiple record fields
 with the same name in scope.

 The point is that I see what you gain by allowing record fields to be
 referred to in a namespaced way, but I don't see what you gain by not
 allowing them to be referred to in a non-namespaced way. In theory you
 wouldn't care because the non-namespaced way is inferior anyways, but
 in practice because all existing Haskell code does it that way, it's
 significant.


My motivation for this entire change is simply to be able to use two records
with field members of the same name. This requires *not* generating
top-level functions to access record fields. I don't know if there is a
valid use case for the old top-level functions once switched over to the
new record system (other than your stated personal preference). We could
certainly have a pragma or something similar that generates top-level
functions even if the new record system is in use.
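The motivating clash, and the workaround it forces today, can be shown in a few lines (Person and Company are made-up examples):

```haskell
-- These two declarations cannot coexist in one module today, because
-- each would generate a top-level accessor named 'name':
--
--   data Person  = Person  { name :: String }
--   data Company = Company { name :: String }  -- error: multiple declarations of 'name'
--
-- The usual workaround is manual prefixing, which the new record
-- system would make unnecessary:
data Person  = Person  { personName  :: String }
data Company = Company { companyName :: String }

main :: IO ()
main = putStrLn (personName (Person "Ada") ++ " / " ++ companyName (Company "Acme"))
```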


 
 
  All of that said, maybe having TDNR with bad syntax is preferable to
  not having TDNR at all. Can't it be extended to the existing syntax
  (of function application)? Or least some better one, which is ideally
  right-to-left? I don't really know the technical details...
 
  Generalized data-namespaces: Also think I'm opposed. This would import
  the problem from OO languages where functions written by the module
  (class) author get to have a distinguished syntax (be inside the
  namespace) over functions by anyone else (which don't).
 
 
  Maybe you can show some example code? To me this is about controlling
  exports of namespaces, which is already possible - I think this is
 mostly a
  matter of convenience.

 If I'm understanding correctly, you're suggesting we be able to write:

 data Data = Data Int where
twice (Data d) = 2 * d
thrice (Data d) = 3 * d
...

 and that if we write 'let x = Data 7 in x.thrice' it would evaluate to
 21. I have two objections.

 The first is the same as with the TDNR proposal: you would have both
 code that looks like
 'data.firstFunction.secondFunction.thirdFunction', as well as the
 existing 'thirdFunction $ secondFunction $ firstFunction data' and
 'thirdFunction . secondFunction . firstFunction $ data', and if you
 have both of them in the same expression (which you will) it becomes
 unpleasant to read because you have to read them in opposite
 directions.
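The mixed-direction concern can be demonstrated with today's (&) from Data.Function (added to base well after this discussion), standing in for the proposed dot:

```haskell
import Data.Char (isLetter, toUpper)
import Data.Function ((&))  -- reverse application: x & f = f x

-- Right-to-left, composition style: read from the end backwards.
shout1 :: String -> String
shout1 = map toUpper . filter isLetter . reverse

-- Left-to-right, '&' standing in for the proposed dot: read in order.
shout2 :: String -> String
shout2 s = s & reverse & filter isLetter & map toUpper

-- Mixing both directions in one expression is the readability objection:
mixed :: String -> String
mixed s = map toUpper (s & reverse & filter isLetter)

main :: IO ()
main = putStrLn (shout2 "hello, world")
```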


This would not be possible because the functions can only be accessed from
the namespace - you could only use the dot (or T.firstFunction). It is
possible as per your complaint below:



 The second is that only the author of the datatype could put functions
 into its namespace; the 'data.foo' notation would only be available
 for functions written by the datatype's 

Re: Records in Haskell

2012-01-08 Thread Ingo Wechsung
2012/1/8 Gábor Lehel illiss...@gmail.com

 2012/1/8 Ingo Wechsung ingo.wechs...@googlemail.com:
 
 
  2012/1/8 Gábor Lehel illiss...@gmail.com
 
 
  The second is that only the author of the datatype could put functions
  into its namespace; the 'data.foo' notation would only be available
  for functions written by the datatype's author, while for every other
  function you would have to use 'foo data'. I dislike this special
  treatment in OO languages and I dislike it here.
 
 
  Please allow me to clarify as far as Frege is concerned.
  In Frege, this is not so, because implementations of class functions in
 an
 instance will be linked back to the instantiated type's name space. Hence
  one could do the following:
 
  module RExtension where
 
  import original.M(R)-- access the R record defined in module
 original.M
 
  class Rextension1 r where
firstNewFunction :: .
secondNewFunction :: .
 
  instance Rextension1 R where
   -- implementation for new functions
 
  And now, in another module one could
 
  import RExtension()  -- equivalent to qualified import in Haskell
 
  and, voilà, the new functions are accessible (only) through R

 Ah, I see. And that answers my other question as well about why you
 would special case class methods like this. Thanks. I think I prefer
 Disciple's approach of introducing a new keyword alongside 'class' to
 distinguish 'virtual record fields' (which get put in the namespace)
 from any old class methods (which don't). Otherwise the two ideas seem
 very similar. (While at the same time I still dislike the
 wrong-direction aspect of both.)


Yes, I can see your point here. OTOH, with the x.y.z notation the number of
parentheses needed can be reduced drastically at times.
In the end it's maybe a matter of taste. Frege started as a pure hobby
project (inspired by Simon Peyton-Jones' paper "Practical type inference
for higher ranked types"), but later I thought it may be interesting for OO
programmers (especially Java) because of the low entry cost (just download
a JAR, stay on JVM, etc.), and hence some aspects are designed so as to
make them feel at home. Ironically, it turned out that the most interest is
in the FP camp, while feedback from the Java camp is almost 0. Never mind!

-- 
Mit freundlichen Grüßen
Ingo


Re: Records in Haskell

2012-01-08 Thread Gábor Lehel
2012/1/8 Greg Weber g...@gregweber.info:


 2012/1/8 Gábor Lehel illiss...@gmail.com

 
 
 
 
  Later on you write that the names of record fields are only accessible
  from the record's namespace and via record syntax, but not from the
  global scope. For Haskell I think it would make sense to reverse this
  decision. On the one hand, it would keep backwards compatibility; on
  the other hand, Haskell code is already written to avoid name clashes
  between record fields, so it wouldn't introduce new problems. Large
  gain, little pain. You could use the global-namespace function as you
  do now, at the risk of ambiguity, or you could use the new record
  syntax and avoid it. (If you were to also allow x.n syntax for
  arbitrary functions, this could lead to ambiguity again... you could
  solve it by preferring a record field belonging to the inferred type
  over a function if both are available, but (at least in my current
  state of ignorance) I would prefer to just not allow x.n for anything
  other than record fields.)
 
 
  Perhaps you can give some example code for what you have in mind - we do
  need to figure out the preferred technique for interacting with
  old-style
  records. Keep in mind that for new records the entire point is that they
  must be name-spaced. A module could certainly export top-level functions
  equivalent to how records work now (we could have a helper that
  generates
  those functions).

 Let's say you have a record.

 data Record = Record { field :: String }

 In existing Haskell, you refer to the accessor function as 'field' and
 to the contents of the field as 'field r', where 'r' is a value of
 type Record. With your proposal, you refer to the accessor function as
 'Record.field' and to the contents of the field as either
 'Record.field r' or 'r.field'. The point is that I see no conflict or
 drawback in allowing all of these at the same time. Writing 'field' or
 'field r' would work exactly as it already does, and be ambiguous if
 there is more than one record field with the same name in scope. In
 practice, existing code is already written to avoid this ambiguity so
 it would continue to work. Or you could write 'Record.field r' or
 'r.field', which would work as the proposal describes and remove the
 ambiguity, and work even in the presence of multiple record fields
 with the same name in scope.

 The point is that I see what you gain by allowing record fields to be
 referred to in a namespaced way, but I don't see what you gain by not
 allowing them to be referred to in a non-namespaced way. In theory you
 wouldn't care because the non-namespaced way is inferior anyways, but
 in practice because all existing Haskell code does it that way, it's
 significant.


 My motivation for this entire change is simply to be able to use two records
 with field members of the same name. This requires *not* generating
 top-level functions to access record fields. I don't know if there is a
 valid use case for the old top-level functions once switched over to the new
 record system (other than your stated personal preference). We could
 certainly have a pragma or something similar that generates top-level
 functions even if the new record system is in use.

Oh, in a sense you're right. If the top-level accessor functions are
treated as if they were defined by the module containing the record,
and there is more than one with the same name, the compiler would see
it as multiple definitions and indeed report an error. On the other
hand if they are treated as imported names (conceptually, implicitly
imported from the namespace of the record, say), then the compiler
would only report an error when you actually try to use the ambiguous
name. I had been assuming the latter case without realizing it. It
corresponds to what you have now if you have multiple records imported
with overlapping field names.

Again, exporting the field accessors to global scope and deferring any
errors from ambiguity or overlap to the point of their use would not
in any way interfere with the use of those same field accessors with
the namespaced syntax. If you only use the namespaced syntax, it would
work exactly as in your proposal: the top-level accessors are never
used so no ambiguity errors are reported. If you only use the
top-level syntax, then it works almost exactly as Haskell currently
does (except you can define multiple records with overlapping field
names in the same module as long as you don't use them, which I had
not considered). The set of well-formed programs if you allow
top-level access would be almost a superset of the set of well-formed
programs if you don't. (The exception is that top-level field
accessors would conflict with non-accessor plain old functions of the
same name, whereas if they weren't visible outside of the record's
namespace they wouldn't, but I don't feel like that's a huge concern.)



 
 
  All of that said, maybe having TDNR with bad syntax is preferable to
  not having 

Re: Records in Haskell

2012-01-08 Thread Matthew Farkas-Dyck
On 08/01/2012, Gábor Lehel illiss...@gmail.com wrote:
 2012/1/8 Greg Weber g...@gregweber.info:


 2012/1/8 Gábor Lehel illiss...@gmail.com

 Thank you. I have a few questions/comments.



 The module/record ambiguity is dealt with in Frege by preferring
 modules and requiring a module prefix for the record if there is
 ambiguity.

 I think I see why they do it this way (otherwise you can't refer to a
 module if a record by the same name is in scope), but on the other
 hand it would seem intuitive to me to choose the more specific thing,
 and a record feels more specific than a module. Maybe you could go
 that way and just not give your qualified imports the same name as a
 record? (Unqualified imports are in practice going to be hierarchical,
 and no one's in the habit of typing those out to disambiguate things,
 so I don't think it really matters if qualified records shadow them.)


 In the case where a Record has the same name as its containing module it
 would be more specific than a module, and preferring it makes sense. I
 think
 doing this inside the module makes sense, as one shouldn't need to refer
 to
 the containing module's name. We should think more about the case where
 module & records are imported.



 Expressions of the form x.n: first infer the type of x. If this is
 just an unbound type variable (i.e. the type is unknown yet), then
 check if n is an overloaded name (i.e. a class operation). [...] Under
 no circumstances, however, will the notation x.n contribute in any way
 in inferring the type of x, except for the case when n is a class
 operation, where an appropriate class constraint is generated.

 Is this just a simple translation from x.n to n x? What's the
 rationale for allowing the x.n syntax for, in addition to record
 fields, class methods specifically, but no other functions?


 It is a simple translation from x.n to T.n x
 The key point being the function is only accessible through the record's
 namespace.
 The dot is only being used to tap into a namespace, and is not available
 for
 general function application.

 I think my question and your answer are walking past each other here.
 Let me rephrase. The wiki page implies that in addition to using the
 dot to tap into a namespace, you can also use it for general function
 application in the specific case where the function is a class method
 (appropriate class constraint is generated etc etc). I don't
 understand why. Or am I misunderstanding?






 Later on you write that the names of record fields are only accessible
 from the record's namespace and via record syntax, but not from the
 global scope. For Haskell I think it would make sense to reverse this
 decision. On the one hand, it would keep backwards compatibility; on
 the other hand, Haskell code is already written to avoid name clashes
 between record fields, so it wouldn't introduce new problems. Large
 gain, little pain. You could use the global-namespace function as you
 do now, at the risk of ambiguity, or you could use the new record
 syntax and avoid it. (If you were to also allow x.n syntax for
 arbitrary functions, this could lead to ambiguity again... you could
 solve it by preferring a record field belonging to the inferred type
 over a function if both are available, but (at least in my current
 state of ignorance) I would prefer to just not allow x.n for anything
 other than record fields.)


 Perhaps you can give some example code for what you have in mind - we do
 need to figure out the preferred technique for interacting with old-style
 records. Keep in mind that for new records the entire point is that they
 must be name-spaced. A module could certainly export top-level functions
 equivalent to how records work now (we could have a helper that generates
 those functions).

 Let's say you have a record.

 data Record = Record { field :: String }

 In existing Haskell, you refer to the accessor function as 'field' and
 to the contents of the field as 'field r', where 'r' is a value of
 type Record. With your proposal, you refer to the accessor function as
 'Record.field' and to the contents of the field as either
 'Record.field r' or 'r.field'. The point is that I see no conflict or
 drawback in allowing all of these at the same time. Writing 'field' or
 'field r' would work exactly as it already does, and be ambiguous if
 there is more than one record field with the same name in scope. In
 practice, existing code is already written to avoid this ambiguity so
 it would continue to work. Or you could write 'Record.field r' or
 'r.field', which would work as the proposal describes and remove the
 ambiguity, and work even in the presence of multiple record fields
 with the same name in scope.
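 As a runnable sketch of the status quo half of this (the namespaced
 `Record.field` and `r.field` forms are the proposal's syntax, not valid
 Haskell today, so they appear only as comments):

```haskell
module Main where

data Record = Record { field :: String }

main :: IO ()
main = do
  let r = Record { field = "hello" }
  -- Existing Haskell: the accessor is a plain top-level function,
  -- ambiguous if another record in scope also has a `field`.
  putStrLn (field r)
  -- Under the proposal one could also write, equivalently:
  --   Record.field r
  --   r.field
```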

 The point is that I see what you gain by allowing record fields to be
 referred to in a namespaced way, but I don't see what you gain by not
 allowing them to be referred to in a non-namespaced way. In theory you
 wouldn't care because the non-namespaced way is 

Re: 7.4.1-pre: Show Integral

2012-01-08 Thread wren ng thornton

On 12/22/11 2:28 PM, J. Garrett Morris wrote:

2011/12/22 Edward Kmett ekm...@gmail.com:

The change, however, was a deliberate _break_ with the standard that
passed through the library review process a few months ago, and is now
making its way out into the wild.


Is it reasonable to enquire how many standard-compliant implementations
of Haskell there are?


I believe the answer is (or on release of 7.4 will become) zero, unless 
UHC is fully compliant. I seem to recall that GHC already had other 
infelicities wrt the report, unless those had been fixed when I wasn't 
looking.


However, this is (to some extent) inevitable, because the haskell' 
process desires that things be already implemented before they are 
considered for inclusion in the new standard. IIRC, the desire to 
explicitly break from h2010 in this regard is as a preamble to getting 
the change into h2012 or h2013. Unfortunately, due to how typeclasses 
are defined there's no way to simultaneously implement the current 
standard and the desired new standard in such a way that the two will be 
able to interact (instead of duplicating all intersecting code so as to 
compile separately against both standards).


While the requirement to state Eq and Show is a burden wrt the old 
standard, it is fully compatible with it.


--
Live well,
~wren

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Unit unboxed tuples

2012-01-08 Thread wren ng thornton

On 12/23/11 8:34 AM, Simon Peyton-Jones wrote:

More uniform!  If you use the singleton-unboxed-tuple data constructor in source code,
as a function, you'd write (\x -> (# x #)).   In a pattern, or applied, you'd
write (# x #).


Shouldn't (# T #) be identical to T?

I know that a putative (T) would be different from T because it would 
introduce an additional bottom, but I don't think that would apply to 
the unboxed case. Or is there something in the semantics of unboxed 
tuples that I'm missing?
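For concreteness, here is roughly what the singleton case looks like in
practice (a sketch assuming a GHC with the UnboxedTuples extension; whether
(# T #) is semantically identical to T is exactly the question above):

```haskell
{-# LANGUAGE UnboxedTuples #-}
module Main where

-- Wrap and unwrap a singleton unboxed tuple. Unlike a boxed wrapper
-- `data T' = T' T`, no extra bottom is introduced: (# x #) is not a
-- heap object of its own.
wrap :: Int -> (# Int #)
wrap x = (# x #)

unwrap :: (# Int #) -> Int
unwrap (# x #) = x

main :: IO ()
main = print (unwrap (wrap 42))
```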


--
Live well,
~wren



Re: Unit unboxed tuples

2012-01-08 Thread wren ng thornton

On 12/23/11 12:57 PM, Tyson Whitehead wrote:

On December 23, 2011 09:37:04 Ganesh Sittampalam wrote:

On 23/12/2011 13:46, Ian Lynagh wrote:

On Fri, Dec 23, 2011 at 01:34:49PM +, Simon Peyton-Jones wrote:

Arguments   Boxed  Unboxed
3   ( , , )(# , , #)
2   ( , )  (# , #)
1
0   () (# #)


It's worth mentioning that if you want to write code that's generic over
tuples in some way, the absence of a case for singletons is actually a
bit annoying - you end up adding something like a One constructor to
paper over the gap. But I can't think of any nice syntax for that case
either.
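Something like the `One` constructor mentioned above might look like this
(names are purely illustrative, not an existing API):

```haskell
-- `One` papers over the missing 1-tuple when writing code that is
-- generic over tuple widths, since Haskell has no native singleton tuple.
newtype One a = One { getOne :: a }
  deriving Show

class Tuple t where
  arity :: t -> Int

instance Tuple ()        where arity _ = 0
instance Tuple (One a)   where arity _ = 1
instance Tuple (a, b)    where arity _ = 2
instance Tuple (a, b, c) where arity _ = 3
```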


I believe Python uses (expr,) (i.e., nothing following the comma) to distinguish a
singleton tuple from a parenthesized term.  Not great, but possibly not that bad.

The other option would be to introduce another unambiguous bracket symbol
for tuples.  The new symbol would be optional except for the singleton.

(- expr, expr, expr -)  =  (expr, expr, expr)
(- expr, expr -)  =  (expr, expr)
(- expr -)  =unable to express
(- -)  =  ()


An alternative is to distinguish, say, (# x #) and its spaceful 
constructor (# #) from the spaceless (##); and analogously for the boxed 
tuples, though that introduces confusion about parentheses for boxing vs 
parentheses for grouping.


FWIW, I'd always thought that () disallowed intervening spaces, though 
ghci tells me that ain't so.


--
Live well,
~wren



Re: ConstraintKinds and default associated empty constraints

2012-01-08 Thread wren ng thornton

On 1/8/12 8:32 AM, Bas van Dijk wrote:

On 23 December 2011 17:44, Simon Peyton-Jones simo...@microsoft.com wrote:

My attempt at forming a new understanding was driven by your example.

class Functor f where
type C f :: * -> Constraint
type C f = ()

sorry -- that was simply type incorrect.  () does not have kind * ->
Constraint


So am I correct that the `class Empty a; instance Empty a` trick is
currently the only way to get default associated empty constraints?
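(For reference, the trick being discussed looks something like the following
sketch; `CFunctor` is renamed here only to avoid clashing with the Prelude's
Functor:)

```haskell
{-# LANGUAGE ConstraintKinds, TypeFamilies, FlexibleInstances #-}
import GHC.Exts (Constraint)

-- A class with a universal instance: `Empty a` holds for any a,
-- so it can serve as a "no constraint" default of the right kind.
class Empty a
instance Empty a

class CFunctor f where
  type C f :: * -> Constraint
  type C f = Empty          -- default: no constraint on elements

  cfmap :: (C f a, C f b) => (a -> b) -> f a -> f b

instance CFunctor [] where
  cfmap = map               -- uses the Empty default for C []
```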


Couldn't the following work?

class Functor f where
type C f :: * -> Constraint
type C f _ = ()

It seems to me that adding const to the type level (either implicitly or 
explicitly) is cleaner and simpler than overloading () to be Constraint, 
* -> Constraint, * -> * -> Constraint, ...


--
Live well,
~wren



Re: Records in Haskell

2012-01-08 Thread wren ng thornton

On 12/28/11 1:34 PM, Donn Cave wrote:

Quoth Greg Weber g...@gregweber.info,

On Wed, Dec 28, 2011 at 2:12 PM, Donn Cave d...@avvanta.com wrote:

...

I would think row polymorphism is a must-have.



Perhaps if you want *extensible* records. If you would like to make some
progress with records in the near future rather than keeping records in
limbo, I think we really need to give up for the moment on any higher form
of abstraction than straight-forward name-spacing.


No, to be clear on that, I haven't given much thought to extensibility
per se, I was thinking row polymorphism is a valuable feature on its own,
and extensibility just seemed to me to be an implicit side benefit.


Yes, row polymorphism would still be helpful in the absence of extensible 
records. In particular it allows for easily determining principal 
(structural) types of records; this is desirable from a generic 
programming perspective even if records on the whole are not 
structurally typed.


That is, we can distinguish the following types

data Foo = MkFoo { x :: T }

data Bar = MkBar { x :: T }

By considering them to desugar into

type Foo = { __type :: Foo , x :: T }
constructor MkFoo :: T -> Foo
pattern MkFoo :: T -> Pattern

type Bar = { __type :: Bar , x :: T }
constructor MkBar :: T -> Bar
pattern MkBar :: T -> Pattern

accessor x :: { x :: T , ... } -> T

Of course, an actual implementation needn't come up with a phantom 
argument like __type in order to nominally distinguish structurally 
identical types. Rather, the existence of the hack shows that it's doable.


--
Live well,
~wren



Re: Records in Haskell

2012-01-08 Thread wren ng thornton

On 12/30/11 10:58 PM, Matthew Farkas-Dyck wrote:

On 30/12/2011, Andriy Polischuk quux...@gmail.com wrote:

Consider this example:
quux (y . (foo.  bar).baz (f . g)) moo
It's not that easy to distinguish from
quux (y . (foo.  bar) . baz (f . g)) moo


Yeah, that's why I dislike dot as compose operator (^_~)


Me too. Though I've been told repeatedly that we're in the losing camp :(

Given that we want to apply selectors to entire expressions, it seems 
more sensible to consider the selector syntax to be a prefix onto the 
selector name. Thus, the selector would be named .baz (or :baz, 
#baz, @baz,...), and conversely any name beginning with the special 
character would be known to be a selector. Therefore, a space preceding 
the special character would be optional, while spaces following the 
special character are forbidden. This has a nice analogy to the use of 
: as a capital letter for symbolic names: function names beginning 
with the special character for record selectors just indicate that they 
are postfix functions with some mechanism to handle overloading (whether 
that be TDNR or whathaveyou).


--
Live well,
~wren



Re: ConstraintKinds and default associated empty constraints

2012-01-08 Thread Antoine Latter
On Sun, Jan 8, 2012 at 11:21 PM, wren ng thornton w...@freegeek.org wrote:


 Couldn't the following work?


    class Functor f where
        type C f :: * -> Constraint
        type C f _ = ()


I get a parse error from that.

The equivalent:

   class Functor f where
   type FC f :: * -> Constraint
   type FC f a = ()

gives the error:

Number of parameters must match family declaration; expected 1
In the type synonym instance default declaration for `FC'
In the class declaration for `Functor'

 It seems to me that adding const to the type level (either implicitly or
 explicitly) is cleaner and simpler than overloading () to be Constraint,
 * -> Constraint, * -> * -> Constraint, ...

 --
 Live well,
 ~wren





Re: ConstraintKinds and default associated empty constraints

2012-01-08 Thread Antoine Latter
On Mon, Jan 9, 2012 at 12:30 AM, Antoine Latter aslat...@gmail.com wrote:
 On Sun, Jan 8, 2012 at 11:21 PM, wren ng thornton w...@freegeek.org wrote:


 Couldn't the following work?


    class Functor f where
        type C f :: * -> Constraint
        type C f _ = ()


 I get a parse error from that.

 The equivalent:

   class Functor f where
       type FC f :: * -> Constraint
       type FC f a = ()


The definitions are accepted by GHC:

   class Functor f where
   type FC f a :: Constraint
   type FC f a = ()

    fmap :: (FC f a, FC f b) => (a -> b) -> f a -> f b

   instance Functor [] where
   fmap = map

But I don't like the 'a' being an index parameter, and then the
following expression:

   fmap (+1) [1::Int]

Gives the error:

Could not deduce (FC [] Int) arising from a use of `fmap'
In the expression: fmap (+ 1) [1 :: Int]
In an equation for `it': it = fmap (+ 1) [1 :: Int]

 gives the error:

    Number of parameters must match family declaration; expected 1
    In the type synonym instance default declaration for `FC'
    In the class declaration for `Functor'

 It seems to me that adding const to the type level (either implicitly or
 explicitly) is cleaner and simpler than overloading () to be Constraint,
 * -> Constraint, * -> * -> Constraint, ...

 --
 Live well,
 ~wren





Re: ConstraintKinds and default associated empty constraints

2012-01-08 Thread Bas van Dijk
That would be nice. It would also be nice to be able to use _ in type
signatures as in:

const :: a -> _ -> a
const x _ = x

During type checking each _ could be replaced by a fresh unique type
variable. Vice versa should also be possible: during type inference each
unique type variable could be replaced by a _.
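For what it's worth, a wildcard in a signature of this shape is what later
shipped in GHC (7.10+) as the PartialTypeSignatures extension; a minimal
sketch (`konst` renamed to avoid the Prelude clash):

```haskell
{-# LANGUAGE PartialTypeSignatures #-}
{-# OPTIONS_GHC -Wno-partial-type-signatures #-}
module Main where

-- The _ stands for a type the checker fills in with a fresh
-- (generalised) type variable, as described above.
konst :: a -> _ -> a
konst x _ = x

main :: IO ()
main = print (konst 'a' (3 :: Int))   -- prints 'a'
```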

Bas
On Jan 9, 2012 6:22 AM, wren ng thornton w...@freegeek.org wrote:

 On 1/8/12 8:32 AM, Bas van Dijk wrote:

 On 23 December 2011 17:44, Simon Peyton-Jones simo...@microsoft.com
  wrote:

 My attempt at forming a new understanding was driven by your example.

 class Functor f where
 type C f :: * -> Constraint
type C f = ()

 sorry -- that was simply type incorrect.  () does not have kind * ->
 Constraint


 So am I correct that the `class Empty a; instance Empty a` trick is
 currently the only way to get default associated empty constraints?


 Couldn't the following work?

class Functor f where
 type C f :: * -> Constraint
type C f _ = ()

 It seems to me that adding const to the type level (either implicitly or
 explicitly) is cleaner and simpler than overloading () to be Constraint,
 * -> Constraint, * -> * -> Constraint, ...

 --
 Live well,
 ~wren




Re: ConstraintKinds and default associated empty constraints

2012-01-08 Thread Andres Löh
Hi.

 The definitions are accepted by GHC:

   class Functor f where
       type FC f a :: Constraint
       type FC f a = ()

        fmap :: (FC f a, FC f b) => (a -> b) -> f a -> f b

   instance Functor [] where
       fmap = map

Yes. This is what I would have expected to work.

 But I don't like the 'a' being an index parameter, and then the
 following expression:

   fmap (+1) [1::Int]

 Gives the error:

    Could not deduce (FC [] Int) arising from a use of `fmap'
    In the expression: fmap (+ 1) [1 :: Int]
    In an equation for `it': it = fmap (+ 1) [1 :: Int]

 gives the error:

    Number of parameters must match family declaration; expected 1
    In the type synonym instance default declaration for `FC'
    In the class declaration for `Functor'

I get the same error, but it looks like a bug to me: If I move the
declaration

type FC f a = ()

to the instance, then the example passes.
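That is, something like the following sketch, where stating the empty
constraint per instance works where the class-level default fails (class
renamed `CFunctor` here to avoid the Prelude clash):

```haskell
{-# LANGUAGE ConstraintKinds, TypeFamilies #-}
import GHC.Exts (Constraint)

class CFunctor f where
  type FC f a :: Constraint
  cfmap :: (FC f a, FC f b) => (a -> b) -> f a -> f b

instance CFunctor [] where
  type FC [] a = ()    -- the "default", stated per instance
  cfmap = map

-- cfmap (+1) [1 :: Int] now type-checks: FC [] Int reduces to ().
```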

Cheers,
  Andres
