Re: Records in Haskell: Explicit Classy Records

2012-04-23 Thread Strake
On 22/04/2012, AntC anthony_clay...@clear.net.nz wrote:
 Matthew Farkas-Dyck strake888 at gmail.com writes:

 I made another proposal for records in Haskell, meant to solve just
 the namespace problem, and no more.
 http://hackage.haskell.org/trac/ghc/wiki/Records/ExplicitClassyRecords


 Thanks Matthew, I'm finding your wiki too sketchy to follow.

 What name(s) does your proposal generate? And what type(s) do they have?

Ahh, this is partly the beauty of it — it generates no names. All
names are user-declared. The system generates only instances.

 For record access, how does the compiler resolve to the right definition or
 instance from the context of use?

Just as it would any other. It's simply a type class.
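A rough sketch of the general shape, for concreteness (not taken from the
wiki; the names are illustrative only): the user declares the field class
explicitly, and the compiler's only job is to generate the instance for each
record that declares that field.

    class HasX r where          -- user-declared; the system generates no names
      x :: r -> Int

    data R = R { rX :: Int }

    instance HasX R where       -- the kind of instance the system would generate
      x = rX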

 It seems you aren't making any proposal about record updating. So the acid
 test is how do you expect this to be treated:
  e{ x = True }
 {* `e` is some arbitrary expression, possibly denoting a record type;
`x` one of those names for which you've solved the namespace problem
 *}

Ah, sorry; I added this to the wiki.

Cheers,
strake



Re: Records in Haskell: Explicit Classy Records

2012-04-22 Thread AntC
Matthew Farkas-Dyck strake888 at gmail.com writes:

 I made another proposal for records in Haskell, meant to solve just
 the namespace problem, and no more.
 http://hackage.haskell.org/trac/ghc/wiki/Records/ExplicitClassyRecords
 

Thanks Matthew, I'm finding your wiki too sketchy to follow.

What name(s) does your proposal generate? And what type(s) do they have?

For record access, how does the compiler resolve to the right definition or 
instance from the context of use?

It seems you aren't making any proposal about record updating. So the acid 
test is how do you expect this to be treated:
 e{ x = True }
{* `e` is some arbitrary expression, possibly denoting a record type;
   `x` one of those names for which you've solved the namespace problem*}

AntC







Re: Records in Haskell - updating Higher-Ranked fields

2012-03-27 Thread AntC
Gábor Lehel illissius at gmail.com writes:
 
 ..., but DORF actually requires less type system magic than
 SORF, and also already has a working prototype implementation, ...
 
 ... My main complaint against DORF is
 that having to write fieldLabel declarations for every field you want
 to use is onerous. ...
 

(Here's a possible way to reduce the need for declarations
 for non-sharing fields, and to avoid the new fieldLabel decl.)

SPJ introduced some fancy type-handling for higher-ranked fields, but it 
still only helped with accessing an H-R field, not updating it.

If we accept that updating H-R fields in liberated-namespace records is not 
possible, can we simplify the implementation? I think so. Here's what I'd call 
a well-principled work-round (not a hack, but not a full solution either).

(I'm showing it in DORF-like style, but I think it would work equally in SORF 
style with Stringy Kinds. The advantage of showing DORF is that we can try it 
-- which I've done.)

You write:
    data HR = HR{ objectId :: ObjectId          -- type Pun
                , rev :: forall a. [a] -> [a] } -- SPJ's example
        sharing (ObjectId) deriving (...)       -- new syntax
        -- this decl is not sharing `rev`, so no further code needed

    -- HR is sharing objectId, so you need a field Label in scope:
    -- probably you're already declaring newtypes/data
    newtype ObjectId = ObjectId Int     -- declaring a field
      deriving (Has, ... )              -- `Has` makes it a label

Field access can be polymorphic:
    f :: HR -> ([Bool], [Char])
    f r = (r.rev [True], r.rev "hello")

Record update looks like:
    ... myHR{ rev = Rev reverse } ...   -- annoying pun
But perhaps we could support sugar for it:
    ... myHR{ Rev reverse } ...         -- fewer keystrokes!

The HR decl desugars to:
    newtype Rev = Rev (forall a. [a] -> [a])    -- newtype punning
    data HR = HR{ objectId :: ObjectId, rev :: Rev }

    rev :: HR -> (forall a. [a] -> [a])         -- monotype field selector
    rev r = let (Rev fn) = get r (undefined :: Rev) in fn

    instance Has HR Rev   where
        get HR{ rev } _ = rev                   -- have to wrap the fn
        set (Rev fn) HR{ .. } = HR{ rev = (Rev fn) }
    type instance FieldTy HR Rev = Rev          -- have to wrap this,
                                                -- else update doesn't work

So I've simplified `Has` to two type arguments (like the more naive form SPJ 
considers then rejects), and used the field's type itself as the index (so 
that we're punning on the field name):
    class Has r fld   where
        get :: r -> fld -> FieldTy r fld
        set :: fld -> r -> SetTy r fld  -- for type-changing update

For the record type's `sharing` fields we desugar to:
    instance Has HR ObjectId   where
        get HR{ objectId } _ = objectId -- yeah dull, dull
        set x HR{ .. }   = HR{ objectId = x, .. }

and the `deriving (Has)` on the newtype or data desugars to:
    objectId :: {Has r ObjectId} => r -> ObjectId   -- overloaded record
    objectId r = get r (undefined :: ObjectId)      -- selector
    type instance FieldTy r ObjectId = ObjectId     -- not parametric
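For readers who want to try the pieces above, here is a minimal,
self-contained sketch that compiles with GHC extensions. It covers only
`get` (leaving out `set`/`SetTy`), treats `FieldTy` as an ordinary type
family, and renames the record's own fields so they don't clash with the
wrapper selector; it illustrates the shape of the desugaring rather than
reproducing the prototype.

    {-# LANGUAGE RankNTypes, TypeFamilies, MultiParamTypeClasses,
                 FlexibleInstances, NamedFieldPuns #-}

    type family FieldTy r fld

    class Has r fld where
        get :: r -> fld -> FieldTy r fld

    newtype ObjectId = ObjectId Int
    newtype Rev      = Rev (forall a. [a] -> [a])   -- wraps the H-R field

    data HR = HR{ hrObjectId :: ObjectId, hrRev :: Rev }

    type instance FieldTy HR ObjectId = ObjectId
    type instance FieldTy HR Rev      = Rev

    instance Has HR ObjectId where
        get HR{ hrObjectId } _ = hrObjectId

    instance Has HR Rev where
        get HR{ hrRev } _ = hrRev

    rev :: HR -> (forall a. [a] -> [a])             -- monotype wrapper selector
    rev r = case get r (undefined :: Rev) of Rev fn -> fn

    demo :: ([Bool], String)
    demo = (rev myHR [True], rev myHR "hello")
      where myHR = HR (ObjectId 1) (Rev reverse)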

AntC




Re: Records in Haskell

2012-03-08 Thread AntC
 
 On Fri, Mar 02, 2012 at 01:04:13AM +, AntC wrote:
  
  Let me explain better what I mean by two private namespaces, then we'll 
   try to understand how your proposal goes ...
 

Folks, it has been very difficult keeping up with all the twists and turns of 
this thread.

So here's a summary of my proposal, by popular demand: 
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields#ThumbnailSketch

Reminder about this approach:
 * It requires no new semantics from Haskell (beyond ghc v7.2 extensions).
 * Therefore a prototype can be developed,
   and it has! you can download and see for yourself.
 * The syntax for record updates is the same as it is today.
   (Although the semantics is polymorphic, not monomorphic.)
 * The syntax for data declarations is the same as it is today.
 * The syntax for field selection is the same as it is today.
   (But polymorphic for any record with a field of that name.)
 * There is a dot notation syntax possible for field selection,
   (strictly optional, sugar only).
 * It does support type-changing update to records,
   for parametric polymorphic fields and records.

But:
 * This approach does not have a solution for updating fields of
   higher-rank types.
   (In that respect it is like SORF. This is the remaining problem.
    And I suspect that any solution for SORF will equally work for DORF.)

There were some criticisms about how much boilerplate this approach would 
need. There are some ideas/suggestions to improve that. In particular see 
"Option Three: Mixed In-situ and Declared ORF" further down the page from 
the "Thumbnail Sketch".

Constructive feedback welcome.

AntC






Re: Records in Haskell: Type-Indexed Records (another proposal)

2012-03-06 Thread AntC
Matthew Farkas-Dyck strake888 at gmail.com writes:

 
  I think you don't want all those type vars in your record decls -- but only
  vars for the mutatable types, like this:
 
type R c = { X ::. Int, Y::. String, Z ::. c, ... }
 
  Then you don't need a Quasifunctor instance for every field, only the
  mutatable ones.
 
 Yes, I know. That is just a very general example.
 
  Oh, and how do you deal with multiple record constructors as in H98:
  data T a = T1 { x :: a, y :: Bool }
           | T2 { x :: a }
 
 Not sure what you mean. With an argument of such a multiconstructed
 type, I would do as ever in Haskell: pattern-match.

So please show what record declarations look like. And how they get turned 
into Has instances. Your example decl on the wiki for `R a b c` is not valid 
Haskell. What's more, the wiki has it as a `type`. Did you mean a `data`? I'm 
confused.

 
  You don't give full details for your Has instances, but presumably you'd do
  the same equality constraint style as SORF and DORF.
 
 I assume you mean
 instance (v~a) => Has k v (R a) where ...
 
 I'm not sure why we need this, but I assume that we do, since it was
 written by SPJ, so yes.
 

Matthew, you really, really need to understand why SPJ put it that way, if you 
want your proposal to be taken seriously. He wasn't just making it up.

  You don't show how you'd do record update.
 
 Yep. It's on the wiki.
 `qfmap X f r` is `r` mutated by `f` at `X`
 

So do you mean this is what developers put in the code?

  what is the type for:
  r{ X = True }
  That is: update record r, set its X field to True.
 
 This is written as
 qfmap X (const True) (r :: r) :: Quasifunctor X a Bool r s => s;
 

You mean this is what to put in the code?

DORF is getting beaten up for the amount of boilerplate the programmer is 
expected to add (for fieldLabels, etc.) I can't compare apples with apples for 
your proposal, because I can't see what the code looks like that would appear 
in the program.

So far (apart from Quasifunctor) all I can see is that you're varying the 
sugar, without adding anything to the semantics -- except you've not given the 
surface syntax, so I'm only guessing.


AntC




Re: Records in Haskell: Type-Indexed Records (another proposal)

2012-03-05 Thread Matthew Farkas-Dyck
On 03/03/2012, AntC anthony_clay...@clear.net.nz wrote:
 Apart from the Quasifunctor bit, I think you'll find your proposal is a rather
 cut-down version of DORF, just using different syntactic sugar.

 (Oh, and with the arguments to Has in a different order, just to be 
 confusing.)

Not so. I chose this order to make it easier to curry.

 You do have the equivalent of fieldLabel decls. Those are all your type
 indexes: data X = X, etc.

True, but data is not a new keyword.

 And you suggest defining
 x = X

We can define x = X, if we wish, but we need not; we could rather
define x as a selector. It's just that lower-case labels are customary
in Haskell.

 Which is equivalent to DORF mapping from field name `x` to phantom type
 Proxy_x, (but DORF keeps `x` as a field selector function, similar to H98).

Ah, not quite. In DORF, the phantom type is an implicit, magical type,
but in TIR it's an explicit, declared type.

In DORF, either the magical type is in scope, or not; in the former
case, it might clash with a user-defined type, and in the latter, if I
wish to call set, how shall I type its argument?

In TIR, the key type is user-defined, so if there be a clash, then the
user is at fault.

 To make `x` a selector function instead, you'd go:
 x = (.) X   -- or probably x = get X, see below
 Which is exactly the same as DORF (after adjusting for the different order
 of
 arguments).

True.

 And presumably instead of X you'd want a LongandMeaningfulLabel?

No! Real Programmers never choose such names!

I jest. Yes, plainly, I would. X is just an example.

 And if your
 data Customer_id = Customer_id
 was always an Int field, wouldn't it help the reader and the compiler to say
 that? (That's the main extra part in fieldLabels.)

It might help the reader, but so would a simple comment. Nevertheless,
this is fair.

It might help the compiler, but that's an argument by premature
optimization, I think (^_~)

 I think you don't want all those type vars in your record decls -- but only
 vars for the mutatable types, like this:

   type R c = { X ::. Int, Y::. String, Z ::. c, ... }

 Then you don't need a Quasifunctor instance for every field, only the
 mutatable ones.

Yes, I know. That is just a very general example.

 Oh, and how do you deal with multiple record constructors as in H98:
data T a = T1 { x :: a, y :: Bool }
 | T2 { x :: a }

 It wouldn't work to have a different record type for each constructor, 'cos
 you'd turn functions that use them from mono to polymorphic (overloaded --
 needing a class and instances).

Not sure what you mean. With an argument of such a multiconstructed
type, I would do as ever in Haskell: pattern-match.

 You don't give full details for your Has instances, but presumably you'd do
 the same equality constraint style as SORF and DORF.

I assume you mean
instance (v~a) => Has k v (R a) where ...

I'm not sure why we need this, but I assume that we do, since it was
written by SPJ, so yes.

 I think you still need method get and sugar to turn the dot notation into a
 call to get. Having method (.) will usurp altogether dot as function
 composition -- you'll make a lot of enemies! And we need tight binding for
 dot
 notation, so we might as well treat it as special syntax.

No need. (.) is quite a valid name. Nevertheless, this is fair. I
meant dot as an example (though one that might ultimately be chosen).
I like bang, myself; others seem to favour get. The trouble is, in the
latter case, that we'd need to change certain widely-used libraries...

 You don't show how you'd do record update.

Yep. It's on the wiki.
`qfmap X f r` is `r` mutated by `f` at `X`

 The litmus test is what is the
 type
 for:
 r{ X = True }
 That is: update record r, set its X field to True.

This is written as
qfmap X (const True) (r :: r) :: Quasifunctor X a Bool r s => s;
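For readers without the wiki to hand, here is one hedged guess at a shape of
Quasifunctor that makes the type above work; the wiki's actual definition may
differ, and the record and key names are illustrative only.

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
                 FlexibleInstances #-}

    -- k is the key type, a/b the old and new field types,
    -- r/s the old and new record types.
    class Quasifunctor k a b r s | k r -> a, k s -> b, k r b -> s where
        qfmap :: k -> (a -> b) -> r -> s

    data X = X
    data R a = R{ rX :: a, rY :: Bool }

    instance Quasifunctor X a b (R a) (R b) where
        qfmap X f r = r{ rX = f (rX r) }

    -- so r{ X = True } would correspond to something like:
    example :: R Bool
    example = qfmap X (const True) (R (42 :: Int) False)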

 AntC

Cheers,
strake



Re: Records in Haskell: Type-Indexed Records (another proposal)

2012-03-03 Thread AntC
Matthew Farkas-Dyck strake888 at gmail.com writes:

 
 Hello all.
 
 I wrote a new proposal for the Haskell record system. It can be found
 at http://hackage.haskell.org/trac/ghc/wiki/Records/TypeIndexedRecords
 
 Records are indexed by arbitrary Haskell types. Scope is controlled as
 scope of key types. No fieldLabel declarations needed (as in DORF).
 
 Cheers,
 strake
 

Thanks Matthew,

It's good to explore the design space.

Apart from the Quasifunctor bit, I think you'll find your proposal is a rather 
cut-down version of DORF, just using different syntactic sugar. (Oh, and with 
the arguments to Has in a different order, just to be confusing.)

You do have the equivalent of fieldLabel decls. Those are all your type 
indexes: data X = X, etc.

And you suggest defining
x = X

Which is equivalent to DORF mapping from field name `x` to phantom type 
Proxy_x, (but DORF keeps `x` as a field selector function, similar to H98).

To make `x` a selector function instead, you'd go:
x = (.) X   -- or probably x = get X, see below
Which is exactly the same as DORF (after adjusting for the different order of 
arguments).

And presumably instead of X you'd want a LongandMeaningfulLabel?

And if your 
data Customer_id = Customer_id
was always an Int field, wouldn't it help the reader and the compiler to say 
that? (That's the main extra part in fieldLabels.)

I think you don't want all those type vars in your record decls -- but only 
vars for the mutatable types, like this:

  type R c = { X ::. Int, Y::. String, Z ::. c, ... }

Then you don't need a Quasifunctor instance for every field, only the 
mutatable ones.

Oh, and how do you deal with multiple record constructors as in H98:
   data T a = T1 { x :: a, y :: Bool }
            | T2 { x :: a }

It wouldn't work to have a different record type for each constructor, 'cos 
you'd turn functions that use them from mono to polymorphic (overloaded -- 
needing a class and instances).

You don't give full details for your Has instances, but presumably you'd do 
the same equality constraint style as SORF and DORF.

I think you still need method get and sugar to turn the dot notation into a 
call to get. Having method (.) will usurp altogether dot as function 
composition -- you'll make a lot of enemies! And we need tight binding for dot 
notation, so we might as well treat it as special syntax.

You don't show how you'd do record update. The litmus test is what is the type 
for:
r{ X = True }
That is: update record r, set its X field to True.

AntC






Re: Records in Haskell

2012-03-02 Thread AntC
Isaac Dupree ml at isaac.cedarswampstudios.org writes:

...
  So under your proposal, a malicious client could guess at the fieldnames in
  your abstraction, then create their own record with those fieldnames as
  SharedFields, and then be able to update your precious hidden record type.
 
 Show me how a malicious client could do that.  Under DORF plus my 
 mini-proposal,
 
 module Abstraction (AbstractData) where
...
 fieldLabel field1 --however it goes
 [code]

Isaac, here's a suggestion: please write up your proposal on a wiki.

Don't expect that your readers will have bothered to read any of your posts, 
not even those a few back in the thread.

Don't expect they'll have read the DORF proposal, or the SORF, or TDNR, nor 
even the front page of the Records wiki.

Don't expect anybody will believe your claims unless you produce a prototype 
showing how you translate your code into legal Haskell.

Don't expect anybody will believe your prototype unless it has meaningful 
field names and is illustrating a realistic business application.

Once people look at your code or wiki, don't expect they'll get your syntax 
right: you'll have to explain that from scratch.

Don't expect they'll even bother to get this right:
 fieldLabel field1 --however it goes

Don't expect they'll understand the difference between a polymorphic record 
system vs. the narrow namespacing issue - in fact expect them to make all 
sorts of suggestions for polymorphic record systems.

Don't expect they'll try running the prototype code you laboured so hard to 
get working.

Do expect to get a heap of requests for clarifications, which you also put up 
on to the wiki so that it grows and grows -- even to explain things which you 
thought were pretty obvious.

Do expect to explain the standard Haskell behaviour that you have not changed. 
It's not enough to say "This follows standard Haskell behaviour." Do expect to 
find your wiki page growing and growing.

Do expect to get a load of posts starting "I haven't followed everything, ..." 
or "It's a while since I 'tuned out' of the Records thread, ..." and wanting 
you to explain all the bits they could read for themselves on the wiki.

Then expect they'll pick a few words out of your summary and lambast you for 
it, even though you politely requested they read the wiki to get the full 
story (and which they clearly did not do).

Throughout all this do expect to remain patient, civil and polite.

Do not expect to have a social life or get much sleep. Do expect your wife to 
ask who you're writing to, and why.


AntC





Re: Records in Haskell

2012-03-02 Thread AntC
Gábor Lehel illissius at gmail.com writes:

 ...
 
 ... My main complaint against DORF is
 that having to write fieldLabel declarations for every field you want
 to use is onerous. If that could be solved, I don't think there are
 any others. (But even if it can't be, I still prefer DORF.)
 

Thank you Gábor, I understand that 'complaint'.

I have been trying to keep the design 'clean': either the module is totally 
DORF, or it's totally H98.

But I've also tried to conform to H98 style where possible. So:
* DORF field selectors are just functions, like H98 field selector functions.
* dot syntax is just reverse apply, so could be used for H98 selectors
* pattern syntax still works, and explicit record constructor syntax
   (relying on DisambiguateRecordFields)
* record update syntax is the same (but with a different desugaring)
* record decl syntax is the same
  (but desugars to a Has instance, instead of a function)
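On "dot syntax is just reverse apply": the desugaring is no more than flipped
function application, as in the sketch below (the operator name is made up
here; in the proposal the dot is optional sugar with tight binding rather
than an ordinary operator).

    (.?) :: a -> (a -> b) -> b
    r .? f = f r            -- so  r.field  means  field r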

There have been several suggestions amongst the threads to mix H98-style 
fields with DORF-style records (or perhaps I mean vice-versa!):
* We'd need to change the record decl syntax to 'flag' DORF fields (somehow).
* H98 fields desugar to monomorphic field selector functions, as usual.
  So if you have more than one in scope, that's a name clash.
* DORF fields desugar to Has instances.
  (providing you've declared the fieldLabel somewhere)
  Perhaps we could take advantage of knowing it's DORF
   to pick up the field type from the fieldLabel decl?

I think you could then 'mix and match' DORF and H98 fields in your expressions 
and patterns (that was certainly part of my intention in designing DORF).

There's one difficulty I can see:
* record update would have to know which sort of field it was updating in:
r{ fld = expr }
  If `fld` is DORF, this desugars to a call to `set`.
  If H98, this code stands as is.
What about:
r{ fldH98 = expr1, fldDORF = expr2, fldH983 = expr3, fldDORF4 = expr4 }
I think:
* for DORF updates `set` can only go one field at a time,
  so it turns into a bunch of nested `set`s
  (One for fldDORF, inside one for fldDORF4.)
* for H98 it can do simultaneous, so in effect we go:
  let r' = r{ fldDORF = expr2, fldDORF4 = expr4 }   -- desugar to nested
  in  r'{ fldH98 = expr1, fldH983 = expr3 }
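A hedged, much-simplified illustration of that nested desugaring; `set` here
is a non-type-changing stand-in for the DORF method, and all the names are
made up for the example.

    {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

    data FldDORF  = FldDORF         -- proxy types standing in for fieldLabels
    data FldDORF4 = FldDORF4

    class Set r l where
        set :: l -> Int -> r -> r   -- simplified: Int fields, no type change

    data Rec = Rec{ fldH98 :: Int, fldDORF :: Int, fldH983 :: Int, fldDORF4 :: Int }

    instance Set Rec FldDORF  where set _ x r = r{ fldDORF  = x }
    instance Set Rec FldDORF4 where set _ x r = r{ fldDORF4 = x }

    -- r{ fldH98 = e1, fldDORF = e2, fldH983 = e3, fldDORF4 = e4 } desugars
    -- roughly as above: nested sets for the DORF fields, then one
    -- simultaneous H98 update.
    update :: Int -> Int -> Int -> Int -> Rec -> Rec
    update e1 e2 e3 e4 r =
        let r' = set FldDORF4 e4 (set FldDORF e2 r)
        in  r'{ fldH98 = e1, fldH983 = e3 }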

Remaining question: how do we tell a DORF field from an H98 one,
at the point of the record update expression?
What is the difference? Find the field selector in the environment from the 
name:
- if monomorphic, it's H98
- if overloaded, it's DORF

But! but! we don't know its type until the type inference phase.
Yet we need to desugar the syntax at the syntax phase(!)

Suggestions please!


Also an obfuscation factor: perversely, the record type and field labels might 
have been exported, but not the selector function.



AntC






Re: Records in Haskell

2012-03-02 Thread AntC
Isaac Dupree ml at isaac.cedarswampstudios.org writes:

 
 
  In the meantime, I had an idea (that could work with SORF or DORF) :
 
  data Foo = Foo { name :: String } deriving (SharedFields)
 
  The effect is: without that deriving, the declaration behaves just
  like H98.
 
  Thanks Isaac, hmm: that proposal would work against what DORF is trying to 
do.
 
  What you're not getting is that DORF quite intentionally helps you hide the
  field names if you don't want your client to break your abstraction.
 
  So under your proposal, a malicious client could guess at the fieldnames in
  your abstraction, then create their own record with those fieldnames as
  SharedFields, and then be able to update your precious hidden record type.
 
 Show me how a malicious client could do that.  Under DORF plus my 
 mini-proposal,
 
 module Abstraction (AbstractData) where
 data AbstractData = Something { field1 :: Int, field2 :: Int }
 ...
 --break abstraction how? let's try...
 
 module Client1 where
 import Abstraction
 data Breaker = Something { field1 :: Int } deriving (SharedFields)
 -- compile fails because there are no field-labels in scope

Correct that the fieldLabel is not in scope, so that compile will fail; but 
what price did you pay?

Hint: what did you import with `Abstraction`?
Answer: you did not import the `field1` selector function, nor the mechanism 
behind it.

So in module Client1 you can't access the `field1` content of a record type 
AbstractData. 

OK, that's sometimes something you want: to be able to pass around records of 
a specific type without allowing the client to look inside them at all.

But I was talking about the more common requirement for encapsulation. I want 
to control access to my record type: the client can read (certain) fields, but 
not update them. Other fields I don't want the client to even know about. 
(You've achieved the last part with your Client1, for all of the fields.)

(FYI: that's how wiki pages turn out so long; specifying exactly all the ins 
and outs at that sort of subtle detail.)

AntC






Re: Records in Haskell

2012-03-02 Thread AntC
AntC anthony_clayden at clear.net.nz writes:

 
 Gábor Lehel illissius at gmail.com writes:
 
  ...
  
  ... My main complaint against DORF is
  that having to write fieldLabel declarations for every field you want
  to use is onerous. If that could be solved, I don't think there are
  any others. (But even if it can't be, I still prefer DORF.)
  
 
 Thank you Gábor, I understand that 'complaint'.
 
 I have been trying to keep the design 'clean': either the module is totally 
 DORF, or it's totally H98.
 
 ...
 There have been several suggestions amongst the threads to mix H98-style 
 fields with DORF-style records (or perhaps I mean vice-versa!):
 * We'd need to change the record decl syntax to 'flag' DORF fields (somehow).
 ...
 There's one difficulty I can see:
 ...
 
 Suggestions please!
 

Wow! Well, thank you for all that hard thought going into my question.

I've put up a tweak to the proposal as "Option Three: Mixed In-situ and 
Declared ORF".

This does _not_ re-introduce H98 style fields, but does simulate them in a way 
that fits better with DORF.

Do I dub this MIDORF? How will the cat with the hairballs pronounce it ;-)?

[Oh, and sorry Isaac: the word count on the wiki has gone up some more.]

AntC





Re: Records in Haskell

2012-03-01 Thread AntC
J. Garrett Morris jgmorris at cs.pdx.edu writes:

 
 On Wed, Feb 29, 2012 at 11:05 PM, AntC anthony_clayden at clear.net.nz 
wrote:
  I repeat: nobody is using a type-level string. You (or someone) is
  making it up.
 
 It isn't clear where that idea came from.
 
 On Mon, Jan 2, 2012 at 4:38 AM, Simon Peyton-Jones
 simonpj at microsoft.com wrote:
  It seems to me that there's only one essential missing language feature,
  which is appropriately-kinded type-level strings (and, ideally, the ability
  to reflect these strings back down to the value level).
 
  * Provide type-level string literals, so that “foo” :: String
 
 Huh.

Thank you Garrett, I feel suitably chided. So the 'culprit' is 'your man 
himself'.

 
 You may want to call your type-level-things-that-identify-fields
 strings, labels, fieldLabels, or rumbledethumps, but surely that's not
 the point of interest here?
 
  /g
 

Ah, but there _is_ a point of interest: under DORF I _must_ call my type-level-
things-etc: **types** (or perhaps proxy **types**),

Because they are only and exactly **types**.

And because they are exactly **types** they come under usual namespace control.

SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them because 
they're new-fangled, I'm opposed because I can't control the namespace.)

AntC





Re: Records in Haskell

2012-03-01 Thread J. Garrett Morris
On Wed, Feb 29, 2012 at 11:58 PM, AntC anthony_clay...@clear.net.nz wrote:
 SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them
 because they're new-fangled, I'm opposed because I can't control the
 namespace.)

Nah, they have kinds, and they don't take parameters, so they're
probably types.  Whether you prefer that foo in module A mean the same
thing as foo in module B is entirely up to you; while it might seem
intuitive to do so, it's also true that if I write

 data List t = Cons t (List t) | Nil

in two different modules, I declare two entirely distinct list types,
even if the natural semantics of the two types might be hard to
distinguish.

 /g

--
Would you be so kind as to remove the apricots from the mashed potatoes?



Re: Records in Haskell

2012-03-01 Thread AntC
J. Garrett Morris jgmorris at cs.pdx.edu writes:

 On Wed, Feb 29, 2012 at 11:58 PM, AntC anthony_clayden at clear.net.nz 
wrote:
  SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them
  because they're new-fangled, I'm opposed because I can't control the
  namespace.)
 
 Nah, they have kinds, and they don't take parameters, so they're
 probably types.  Whether you prefer that foo in module A mean the same
 thing as foo in module B is entirely up to you; ...
  /g
 

It's about representation hiding:
- I don't want the importer to even know I have field foo,
- but they can use my field bar

Or perhaps:
- I don't want the importer to update field foo
- but they can read foo, and they can update bar

(This is especially to support using records to emulate OO, where we want 
abstraction/'separation of concerns'.)

If the importer (either maliciously or by accident) creates their own record 
with a foo field, I specifically _don't_ want them to try sharing my 
hidden foo.
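The H98 analogue of the control being described, as a hedged illustration
(module and field names made up): with plain records the export list already
gives this kind of control, and the point is to keep that behaviour.

    module Abstraction ( Rec(), bar ) where     -- export the type and bar only

    data Rec = Rec{ foo :: Int, bar :: Int }
    -- importers can call `bar`, but `foo` (and the constructor) stay hidden,
    -- so they can neither read nor update foo.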

AntC




Re: Records in Haskell

2012-03-01 Thread Ian Lynagh
On Thu, Mar 01, 2012 at 07:58:42AM +, AntC wrote:
 
 SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them 
 because 
 they're new-fangled, I'm opposed because I can't control the namespace.)

I haven't followed everything, so please forgive me if this is a stupid
question, but if you implement this variant of SORF:


http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields#ScopecontrolbygeneralisingtheStringtypeinHas

then do you get the behaviour of SORF when using field names starting
with a lower-case letter, and DORF when they start with an upper-case
letter?


Thanks
Ian




Re: Records in Haskell

2012-03-01 Thread Gabriel Dos Reis
On Thu, Mar 1, 2012 at 8:38 AM, Ian Lynagh ig...@earth.li wrote:
 On Thu, Mar 01, 2012 at 07:58:42AM +, AntC wrote:

 SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them 
 because
 they're new-fangled, I'm opposed because I can't control the namespace.)

 I haven't followed everything, so please forgive me if this is a stupid
 question, but if you implement this variant of SORF:

    
 http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields#ScopecontrolbygeneralisingtheStringtypeinHas

 then do you get the behaviour of SORF when using field names starting
 with a lower-case letter, and DORF when they start with an upper-case
 letter?


 Thanks
 Ian

It is close to a hack (e.g. taking over a special meaning for String)
that has been implemented in the Scratchpad II (now known as AXIOM)
system for over 3 decades.  I found it odd, though for Haskell it
may have a completely different taste. If you have a copy of the AXIOM
book


http://www.amazon.com/AXIOM-Scientific-Computation-Richard-Jenks/dp/0387978550

have a look at the end of page 71.

-- Gaby



Re: Records in Haskell

2012-03-01 Thread AntC
Ian Lynagh igloo at earth.li writes:

 
 On Thu, Mar 01, 2012 at 07:58:42AM +, AntC wrote:
  
  SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them 
because 
  they're new-fangled, I'm opposed because I can't control the namespace.)
 
 I haven't followed everything, so please forgive me if this is a stupid
 question, but if you implement this variant of SORF:
 
 
http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields#ScopecontrolbygeneralisingtheStringtypeinHas
 
 then do you get the behaviour of SORF when using field names starting
 with a lower-case letter, and DORF when they start with an upper-case
 letter?
 
 Thanks
 Ian
 

And you get "In my opinion, this is ugly, since the selector can be either a 
type name or a label and the semantics are nonsame. Rather, we need scoped 
instances." [SPJ]

So if we open the gate for "ugly", do we also open it for "hacks" and 
for "unscalable"?

Then we also have a solution for updating higher-ranked typed fields.

I guess this is all for decision by the implementors.

If we need to go into scoped instances, I'd be really scared -- that seems 
like a huge, far-reaching change, with all sorts of opportunity for mysterious 
compile fails and inexplicable behaviour-changing from imports.

I have some creative ideas for introducing overlapping instances; shall I run 
them up the flagpole as well?


AntC





Re: Records in Haskell

2012-03-01 Thread Ian Lynagh
On Thu, Mar 01, 2012 at 08:52:29PM +, AntC wrote:
 Ian Lynagh igloo at earth.li writes:
 
  
  On Thu, Mar 01, 2012 at 07:58:42AM +, AntC wrote:
   
   SORF's whadyoumaycalls are at the Kind level. (I'm not opposed to them 
 because 
   they're new-fangled, I'm opposed because I can't control the namespace.)
  
  I haven't followed everything, so please forgive me if this is a stupid
  question, but if you implement this variant of SORF:
  
  
 http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields#ScopecontrolbygeneralisingtheStringtypeinHas
  
  then do you get the behaviour of SORF when using field names starting
  with a lower-case letter, and DORF when they start with an upper-case
  letter?
 
 And you get In my opinion, this is ugly, since the selector can be either a 
 type name or a label and the semantics are nonsame. Rather, we need scoped 
 instances. [SPJ]

That comment was from strake888, not SPJ?

Personally, in the context of Haskell (where the case of the first
character often determines the behaviour, e.g. a pattern of Foo vs foo),
I don't think it's too bad.

 So if we open the gate for ugly, do we also open it for hacks and 
 for unscalable?
 
 Then we also have a solution for updating higher-ranked typed fields.
 
 I guess this is all for decision by the implementors.
 
 If we need to go into scoped instances, I'd be really scared -- that seems 
 like a huge, far-reaching change, with all sorts of opportunity for 
 mysterious 
 compile fails and inexplicable behaviour-changing from imports.
 
 I have some creative ideas for introducing overlapping instances; shall I run 
 them up the flagpole as well?

I'm getting lost again.

But I think you are agreeing that (leaving aside the issue of whether
the design is reasonable) the above variant would indeed allow the user
to choose the behaviour of either SORF or DORF.


Thanks
Ian




Re: Records in Haskell

2012-03-01 Thread AntC
Ian Lynagh igloo at earth.li writes:

 
 On Thu, Mar 01, 2012 at 08:52:29PM +, AntC wrote:
  
  And you get In my opinion, this is ugly, ...
 
 That comment was from strake888, not SPJ?
 

Thanks Ian, you're right. Specifically, it's 'igloo's tweak to the proposal 
and 'strake888's comment. (I had erroneously thought the whole of that page 
was SPJ's, and I hadn't much re-read it since SPJ posted it.)


 Personally, in the context of Haskell (where the case of the first
 character often determines the behaviour, e.g. a pattern of Foo vs foo),
 I don't think it's too bad.
 

Hmm. Upper case in an expression always means data constructor (or qualified 
name). (You've possibly not been watching the outrage at changing the meaning 
of `.` ;-)

Also this would be ambiguous:

object.SubObject.Field.subField

-- are the `SubObject` and `Field` (private) field selectors,
-- or a qualified name for subField?
-- or perhaps SubObject.Field is a qualified private field selector?

Putting parentheses would cut out some of those interpretations, but not all 
of them??


In terms of scope control, I think (I'm guessing rather) you do get similar 
behaviour to DORF, with the added inconvenience of:
 * an extra arg to Has (how does the constraint sugar cope?)
     r{ field :: Int } => ...
     r{ Field :: Int } => ...                -- ? does that look odd
     -- suppose I have two private namespaces
     r{ Field :: Int ::: Field1 } => ...     -- ??
     r{ (Field ::: Field2) :: Int } => ...   -- ???
 * something(?) extra in record decls:
     data PublicRecord = Pub { field :: Int }
     data PrivateRecord = Priv { Field :: Int }             -- ?
     data PrivateRecord = Priv { Field :: Int ::: Field2 }  -- ??
 * a need for equality constraints between Kind and Type
   (that's the ft ~ FieldT bit)
   The class decl and instances are 'punning' on tyvar `ft`
   being both a type and a Kind.
   Is that even contemplated with Kinds?
 * a need for something(?) different on record update syntax:
     pubRec{ field = 27 }
     privRec{ Field = 92 }            -- does upper case there look odd to you?
     privRec{ Field = 87 ::: Field2 }

("ugly" is a mild reaction, the more I think about it.)

 
 But I think you are agreeing that (leaving aside the issue of whether
 the design is reasonable) the above variant would indeed allow the user
 to choose the behaviour of either SORF or DORF.
 

No, not the user to choose, but the implementor. We can't possibly try to 
support both approaches.

AntC







Re: Records in Haskell

2012-03-01 Thread AntC
AntC anthony_clayden at clear.net.nz writes:

 
 Ian Lynagh igloo at earth.li writes:
 
  
  But I think you are agreeing that (leaving aside the issue of whether
  the design is reasonable) the above variant would indeed allow the user
  to choose the behaviour of either SORF or DORF.
  
 
 No, not the user to choose, but the implementor. We can't possibly try to 
 support both approaches.
 

Sorry, I mis-interpreted your last paragraph. I think you meant:

... allow the user to choose [public or restricted namespacing] behaviour 
under either the SORF or DORF proposal.

Yes-ish (leaving aside that issue). Under SORF you have an extra behaviour:
- use String Kinds and your label is public-everywhere and
   completely uncontrollable.
- (So someone who imports your label can't stop it getting re-exported.)
- This is unlike any other user-defined name in Haskell.

I'm not sure whether to call that extra behaviour a 'feature' (I tend more 
to 'wart'), but it's certainly another bit of conceptual overload.

I prefer DORF's sticking to conventional/well-understood H98 namespacing 
controls.


AntC








Re: Records in Haskell

2012-03-01 Thread Ian Lynagh
On Thu, Mar 01, 2012 at 10:46:27PM +, AntC wrote:
 
 Also this would be ambiguous:
 
 object.SubObject.Field.subField

Well, we'd have to either define what it means, or use something other
than '.'.

 In terms of scope control, I think (I'm guessing rather) you do get similar 
 behaviour to DORF, with the added inconvenience of:
 * an extra arg to Has (how does the constraint sugar cope?)

You can infer ft from the f.

 r{ field :: Int } => ...
 r{ Field :: Int } => ...   -- ? does that look odd

Well, it's new syntax.

   -- suppose I have two private namespaces
 r{ Field :: Int ::: Field1 } => ... -- ??
 r{ (Field ::: Field2) :: Int } => ...   -- ???

You've lost me again.

  But I think you are agreeing that (leaving aside the issue of whether
  the design is reasonable) the above variant would indeed allow the user
  to choose the behaviour of either SORF or DORF.
  
 
 No, not the user to choose, but the implementor. We can't possibly try to 
 support both approaches.

I don't follow. You agreed above that you do get similar behaviour to
DORF, and if you just use lowercase field names then the behaviour is
the same as SORF. Therefore both are supported.


Thanks
Ian




Re: Records in Haskell

2012-03-01 Thread Ian Lynagh
On Thu, Mar 01, 2012 at 11:32:27PM +, AntC wrote:
 AntC anthony_clayden at clear.net.nz writes:
 
  
  Ian Lynagh igloo at earth.li writes:
  
   
   But I think you are agreeing that (leaving aside the issue of whether
   the design is reasonable) the above variant would indeed allow the user
   to choose the behaviour of either SORF or DORF.
   
  
  No, not the user to choose, but the implementor. We can't possibly try to 
  support both approaches.
  
 
 Sorry, I mis-interpreted your last paragraph. I think you meant:
 
 ... allow the user to choose [public or restricted namespacing] behaviour 
 under either the SORF or DORF proposal.

Yes, exactly.

 Yes-ish (leaving aside that issue). Under SORF you hve an extra behaviour:
 - use String Kinds and your label is public-everywhere and
completely uncontrollable.
 - (So someone who imports your label can't stop it getting re-exported.)
 - This is unlike any other user-defined name in Haskell.
 
 I'm not sure whether to call that extra behaviour a 'feature' (I tend more 
 to 'wart'), but it's certainly another bit of conceptual overload.

Right, but other people would prefer the SORF behaviour to the DORF
behaviour.

But note that if this was implemented, then the only difference between
the 3 is in the desugaring. So if you desugar r.f only then you get
SORF, r.F only then you get DORF (well, with different syntax,
probably), and if you desugar both then you get the choice.


Thanks
Ian




Re: Records in Haskell

2012-03-01 Thread AntC
Ian Lynagh igloo at earth.li writes:

 
 On Thu, Mar 01, 2012 at 11:32:27PM +, AntC wrote:
  AntC anthony_clayden at clear.net.nz writes:
  
   
   Ian Lynagh igloo at earth.li writes:
   

But I think you are agreeing that (leaving aside the issue of whether
the design is reasonable) the above variant would indeed allow the user
to choose the behaviour of either SORF or DORF.

   
   No, not the user to choose, but the implementor. We can't possibly try 
to 
   support both approaches.
   
  
  Sorry, I mis-interpreted your last paragraph. I think you meant:
  
  ... allow the user to choose [public or restricted namespacing] behaviour 
  under either the SORF or DORF proposal.
 
 Yes, exactly.
 
  Yes-ish (leaving aside that issue). Under SORF you hve an extra behaviour:
  - use String Kinds and your label is public-everywhere and
 completely uncontrollable.
  - (So someone who imports your label can't stop it getting re-exported.)
  - This is unlike any other user-defined name in Haskell.
  
  I'm not sure whether to call that extra behaviour a 'feature' (I tend more 
  to 'wart'), but it's certainly another bit of conceptual overload.
 
 Right, but other people would prefer the SORF behaviour to the DORF
 behaviour.
 

Would they? How could we know? Most of the posts here have been from people 
who don't get anywhere near to understanding the issues. There's been a 
vociferous poster who wants to have lots of fields with the same name and 
have them each mean something different. (No, I don't understand either.)

Under DORF they could get the public-everywhere behaviour by exporting and 
importing unqualified (just like H98!).

 But note that if this was implemented, then the only difference between
 the 3 is in the desugaring. So if you desugar r.f only then you get
 SORF, r.F only then you get DORF (well, with different syntax,
 probably), and if you desugar both then you get the choice.
 
 Thanks
 Ian
 

Sorry Ian, but I've got conceptual overload. I feel I understand DORF 
behaviour not just because I designed it, but also because I can (and have!) 
prototyped it under GHC v7.2, including public-everywhere and controlled 
import/export -- see my additional attachment to the implementor's page.

With Kinds and Stringy Kinds and type-to-Kind equality constraints I feel I 
want to better understand how that affects the design space. I don't think 
that's possible yet, even in v7.4(?)

Right from the beginning of SPJ's SORF proposal, I've had a feeling that ghc 
central, having introduced the new whizzy Kinds, now wants to find a use for 
them. Surely there would be other applications for Kinds that would be clearer 
use cases than patching-up Haskell's kludgy record design?

We're focussing too narrowly on this representation-hiding issue. There are 
other important differences between SORF and DORF (which I've tried to explain 
on my comparison page on the wiki). Nothing you've said so far is being 
persuasive.

(BTW on the comparison wiki, I've put some speculations around using Kinds 
behind the scenes as an implementation for DORF -- implementor's choice. 
Because it's behind the scenes we could use a more compact/specific variety of 
Kind than String. But it's still in danger of suffering the uncontrollable 
public-everywhere issue. Could you suggest an improvement?)

AntC





Re: Records in Haskell

2012-03-01 Thread Gábor Lehel
On Fri, Mar 2, 2012 at 1:06 AM, Ian Lynagh ig...@earth.li wrote:
 On Thu, Mar 01, 2012 at 11:32:27PM +, AntC wrote:
 Yes-ish (leaving aside that issue). Under SORF you hve an extra behaviour:
 - use String Kinds and your label is public-everywhere and
    completely uncontrollable.
 - (So someone who imports your label can't stop it getting re-exported.)
 - This is unlike any other user-defined name in Haskell.

 I'm not sure whether to call that extra behaviour a 'feature' (I tend more
 to 'wart'), but it's certainly another bit of conceptual overload.

 Right, but other people would prefer the SORF behaviour to the DORF
 behaviour.

Who and why? What's the use case?

I was trying to tease this out at another point in the thread. What
use case is there for which Haskell's normal and familiar
classes-and-instances mode of polymorphism isn't appropriate, and for
which we want to introduce this new and alien
global-implicit-name-based mode of polymorphism?

Another point which could sway in SORF's favour might be easier
implementation, but DORF actually requires less type system magic than
SORF, and also already has a working prototype implementation, so I
don't think that works, either.

Let's look at this from the other direction. The advantage of DORF
over SORF is that it handles record fields in a hygienic way, and
that it works with the module system, rather than around it. What
advantage does SORF have over DORF? My main complaint against DORF is
that having to write fieldLabel declarations for every field you want
to use is onerous. If that could be solved, I don't think there are
any others. (But even if it can't be, I still prefer DORF.)



Re: Records in Haskell

2012-03-01 Thread AntC
Ian Lynagh igloo at earth.li writes:

 
  * an extra arg to Has (how does the constraint sugar cope?)
 
 You can infer ft from the f.
 

Let me explain better what I mean by two private namespaces, then we'll try 
to understand how your proposal goes ...

  module T where
  data FieldT = Field
  data RecT = RecT{ Field :: Int }
  ...
  module U where
  data FieldU = Field
  data RecU = RecU{ Field :: Bool }
  ...
  module V where
  import T -- also consider either/both
  import U -- imports hiding (Field)
  data RecV = RecV{ Field :: String }  -- am I sharing this Field?
                                       -- who with?
  ...
  ... r.Field ...     -- is this valid?, if not what is?
  ... r{ Field = e }  -- likewise

(Oh yeah, imports and hiding: how do I do that for these non-String-type-Kinds?
And is this allowed?:
  data NotaField = Constr Int Bool
  data AmIaRec = AmI{ Constr :: String }
  ...
  ... r.Constr ...

It's all getting very tangled trying to squeeze constructors into other roles.)


AntC




Re: Records in Haskell

2012-03-01 Thread Ian Lynagh
On Fri, Mar 02, 2012 at 01:44:45AM +0100, Gábor Lehel wrote:
 On Fri, Mar 2, 2012 at 1:06 AM, Ian Lynagh ig...@earth.li wrote:
 
  Right, but other people would prefer the SORF behaviour to the DORF
  behaviour.
 
 Who and why? What's the use case?
 
 My main complaint against DORF is
 that having to write fieldLabel declarations for every field you want
 to use is onerous.

I believe this is the main concern people have, but I can't speak for
them.


Thanks
Ian




Re: Records in Haskell

2012-03-01 Thread Ian Lynagh
On Fri, Mar 02, 2012 at 01:04:13AM +, AntC wrote:
 
 Let me explain better what I mean by two private namespaces, then we'll try 
 to understand how your proposal goes ...
 
   module T where
   data FieldT = Field
   data RecT = RecT{ Field :: Int }
   ...
   module U where
   data FieldU = Field
   data RecU = RecU{ Field :: Bool }
   ...
   module V where
   import T -- also consider either/both
   import U -- imports hiding (Field)
   data RecV = RecV{ Field :: String }  -- am I sharing this Field?
-- who with?

Ah, I see. No, you couldn't do that, just as you couldn't do
v = Field
You would need to say
data RecV = RecV{ T.Field :: String }

   ... r.Field ...   -- is this valid?, if not what is?

r!T.Field (I took the liberty of using a random different symbol for
field access, for clarity).

   ... r{ Field = e }-- likewise

r{ T.Field = e }

 (Oh yeah, imports and hiding: how do I do that for these 
 non-String-type-Kinds?
 And is this allowed?:
   data NotaField = Constr Int Bool
   data AmIaRec = AmI{ Constr :: String }

No.


Thanks
Ian




Re: Records in Haskell

2012-03-01 Thread Isaac Dupree

On 03/01/2012 01:46 AM, AntC wrote:

Isaac Dupree ml at isaac.cedarswampstudios.org writes:



In the meantime, I had an idea (that could work with SORF or DORF) :

data Foo = Foo { name :: String } deriving (SharedFields)

The effect is: without that deriving, the declaration behaves just
like H98.

(For super flexibility, allow to specify which fields are shared,
like deriving(SharedFields(name, etc, etc)) perhaps.)

Is it too verbose? Or too terrible that it isn't a real class (well,
there's Has...)?

-Isaac


Thanks Isaac, hmm: that proposal would work against what DORF is trying to do.

You're right about the `deriving` syntax currently being used for classes. The
fact of re-purposing the surface syntax is really no different to introducing
different syntax.

[...]

What you're not getting is that DORF quite intentionally helps you hide the
field names if you don't want your client to break your abstraction.

So under your proposal, a malicious client could guess at the fieldnames in
your abstraction, then create their own record with those fieldnames as
SharedFields, and then be able to update your precious hidden record type.


Show me how a malicious client could do that.  Under DORF plus my 
mini-proposal,


module Abstraction (AbstractData) where
data AbstractData = Something { field1 :: Int, field2 :: Int }
{- or it could use shared field names (shared privately) :
fieldLabel field1 --however it goes
fieldLabel field2 --however it goes
data AbstractData = Something { field1 :: Int, field2 :: Int } deriving (SharedFields)

-}

module Client where
import Abstraction
--break abstraction how? let's try...

module Client1 where
import Abstraction
data Breaker = Something { field1 :: Int } deriving (SharedFields)
-- compile fails because there are no field-labels in scope

module Client2 where
import Abstraction
fieldLabel field1 --however it goes
data Breaker = Something { field1 :: Int } deriving (SharedFields)
-- succeeds, still cannot access AbstractData with Client2.field1

module Client3 where
import Abstraction
-- (using standalone deriving, if we permit it for SharedFields at all)
deriving instance SharedFields AbstractData
-- compile fails because not all constructors of AbstractData are in scope


All my mini-proposal does is modify SORF or DORF to make un-annotated 
records behave exactly like H98.



AntC (in an unrelated reply to Ian) :

I prefer DORF's sticking to conventional/well-understood H98 namespacing
controls.


[warning: meta-discussion below; I'm unsure if I'm increasing 
signal/noise ratio]


Since this giant thread is a mess of everyone misinterpreting everyone 
else, I'm not sure yet that DORF's namespacing is well-understood by 
anyone but you.  For example, one of us just badly misinterpreted the 
other (above; not sure who yet).  Would IRC be better? worse?  How can 
the possibly-existent crowd of quiet libraries@ readers who understand 
SORF/DORF/etc. correctly show (in a falsifiable way) that they 
understand? any ideas?  Do people misinterpret DORF this much because 
you posted at least 4000 words[1] without creating and making prominent 
a concise, complete description of its behaviour? (is that right?)


I propose that any new record system have a description of less than 250 
words that's of a style that might go in the GHC manual and that causes 
few if any misinterpretations.  Is that too ambitious?  Okay, it is.


So.  Differently,

I propose that any new record system have a description of less than 500 
words that completely specifies its behaviour and that at least half of 
libraries@ interprets correctly.  (It's fine if the description refers 
to docs for other already-implemented type-system features, e.g. MPTCs 
and kind stuff.[2] )


Should we be trying for such a goal?  (For reference: just SORF's "The 
Base Design" section is 223 words, and just DORF's "Application 
Programmer's view" only up to "Option One" is 451 words, according to 
LibreOffice.  Neither one is a complete description, but then, my 
proposed 500-word description wouldn't mention design tradeoffs.  A 
GHC User's Guide subsection I picked arbitrarily[3] is 402 words.)


[1] I counted the main DORF page plus the one you pointed me to, each of 
which is about 2000: 
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields 
+ 
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/ImplementorsView


[2] My sense is that "(customer_id r) uses familiar type instance 
resolution [...]" is only a precise enough statement if the user 
declared the exact, unedited type of customer_id; and that having 
constraints like r{ customer_id :: Int } would need explanation in 
terms of familiar type inference such as classes.  e.g. in a way that 
would explain r{ SomeModule.customer_id :: Int } (is that allowed?). I 
could try to write such a description and you could tell me where I go 
wrong...


[3] Record field disambiguation 

Re: Records in Haskell

2012-03-01 Thread AntC
Isaac Dupree ml at isaac.cedarswampstudios.org writes:

AntC (in an unrelated reply to Ian) :
 I prefer DORF's sticking to conventional/well-understood H98 namespacing
 controls.

...
 I'm not sure yet that DORF's namespacing is well-understood by 
anyone but you.

No, of course I'm not saying DORF's namespacing is well-understood. I mean:

1. H98 namespacing controls are conventional and well understood.

2. DORF uses H98 controls and only them.

Re 2: Partly you'll just have to take my word for it, but mainly the 
implementors will have to prove it to themselves if DORF is ever going to see 
the light of day, so I'd be daft to claim something I didn't have good 
evidence for.

Also there's strong corroboration: there's a prototype implementation attached 
to the wiki. You can download it and compile it (one module importing the 
other), and run it and try to break the abstractions, and demonstrate sharing 
the fields that are sharable.

You can inspect the code to see if I've desugared my syntax correctly, or 
introduced some trick. (If you see anything 'suspicious', please ask.)

In fact, doing all that would be a far better use of your time (and mine) than 
all that verbiage and word counting.

AntC


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-29 Thread wren ng thornton

On 2/28/12 3:57 AM, AntC wrote:

wren ng thornton wren at freegeek.org writes:

I'm not sure it's a good proposal, but it seems like the only way to
handle this issue is to (1) introduce a new kind for
semantically-oriented field names,


That's what SORF does: the String Kind


and (2) make the Has class use that
kind rather than a type-level string.


No proposal is using a _type_-level string. Barney's confused you.


I was under the impression that all the working proposals were using the 
Has class, a la:


someFunction :: Has "name" a => a -> Foo
someFunction x = ... (name x) ...

modulo the debate about the value-level syntax for records, and modulo 
the debate about whether Has should be exposed to users or hidden inside 
GHC. Is this no longer the case?


--
Live well,
~wren

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-29 Thread wren ng thornton

On 2/29/12 10:51 PM, wren ng thornton wrote:

On 2/28/12 3:57 AM, AntC wrote:

wren ng thornton wren at freegeek.org writes:

I'm not sure it's a good proposal, but it seems like the only way to
handle this issue is to (1) introduce a new kind for
semantically-oriented field names,


That's what SORF does: the String Kind


and (2) make the Has class use that
kind rather than a type-level string.


No proposal is using a _type_-level string. Barney's confused you.


I was under the impression that all the working proposals were using the
Has class, a la:

someFunction :: Has "name" a => a -> Foo
someFunction x = ... (name x) ...

modulo the debate about the value-level syntax for records, and modulo
the debate about whether Has should be exposed to users or hidden inside
GHC. Is this no longer the case?


Ah, it seems the exact nature of the Has class is still being debated. I 
suppose my concern places me in the pro-DORF (or at least anti-SORF) 
camp. Carry on :)


--
Live well,
~wren

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-29 Thread Isaac Dupree

On 02/28/2012 06:40 AM, AntC wrote:

Oliver Batchelor saulzar at gmail.com writes:

Perhaps this situation could occur though?

Module A
fieldLabel name String

Module B
import A -- unknowingly picking up the name label

data Foo = Foo { name :: String } -- uses the name label by accident

So there'd have to be some syntax to make sure you intend to use a
label rather than accidentally use it?
(Not that this is a big issue, the situation is surely minor compared
to sharing unrelated labels all the time)

Oliver



Thanks Oliver, hmm ...


Oliver's example is exactly what I feel weird about (and what I refered 
to as a magical effect).  Sorry for failing to communicate to you 
successfully.


In the meantime, I had an idea (that could work with SORF or DORF) :

data Foo = Foo { name :: String } deriving (SharedFields)

The effect is: without that deriving, the declaration behaves just 
like H98.


(For super flexibility, allow to specify which fields are shared,
like deriving(SharedFields(name, etc, etc)) perhaps.)

Is it too verbose? Or too terrible that it isn't a real class (well, 
there's Has...)?


-Isaac

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-29 Thread AntC
Isaac Dupree ml at isaac.cedarswampstudios.org writes:

 
 In the meantime, I had an idea (that could work with SORF or DORF) :
 
 data Foo = Foo { name :: String } deriving (SharedFields)
 
 The effect is: without that deriving, the declaration behaves just 
 like H98.
 
 (For super flexibility, allow to specify which fields are shared,
 like deriving(SharedFields(name, etc, etc)) perhaps.)
 
 Is it too verbose? Or too terrible that it isn't a real class (well, 
 there's Has...)?
 
 -Isaac
 
Thanks Isaac, hmm: that proposal would work against what DORF is trying to do.

You're right about the `deriving` syntax currently being used for classes. The 
fact of re-purposing the surface syntax is really no different to introducing 
different syntax.

Apart from that, it might work for SORF -- and fall into exactly what I don't 
like about SORF, which is that it sabotages the namespace/module control that 
applies for every other user-defined name in Haskell.

What you're not getting is that DORF quite intentionally helps you hide the 
field names if you don't want your client to break your abstraction.

So under your proposal, a malicious client could guess at the fieldnames in 
your abstraction, then create their own record with those fieldnames as 
SharedFields, and then be able to update your precious hidden record type.

And guessing the fieldnames is dead easy if you've exported the field selector 
function, to allow read-only access -- or so you thought.

Under DORF, despite the client guessing the fieldnames, they can't use them at 
all if you don't export them -- because they're in a sealed-off namespace, 
just like regular Haskell.

AntC




___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-29 Thread AntC
wren ng thornton wren at freegeek.org writes:

 
  That's what SORF does: the String Kind
 
  No proposal is using a _type_-level string. Barney's confused you.
 
 I was under the impression that all the working proposals were using the 
 Has class, ...

Yes, but:

 a la:
 
  someFunction :: Has "name" a => a -> Foo
  someFunction x = ... (name x) ...
 

No! Have you read any of the proposals? Which bit of your anatomy did you use?

Note that that syntax is not valid H98. Also note that the `Has` class uses 
three arguments, and in a different sequence to what you show.

That syntax (with 3 args present) turns up in two places:
1. SPJ's SORF proposal
2. Barney Hilken's postings. Please ignore them, is all I can say.
   Neither I nor SPJ had anything to do with them.

The DORF syntax is:

  someFunction :: Has r Proxy_name t => r -> t

I prefer the sugar, which is the same for SORF and DORF (SPJ invented it, I 
stole it unashamedly):

  someFunction :: r{ name :: t } => r -> t

The difference between the desugarings is crucial:
- SORF desugars to a String Kind -- hence Has r "name" t => ...
  (the "name" in quotes represents a String Kind)

- DORF desugars to a (proxy) type, not some new-fangled Kind:
  Has r Proxy_name t => ...
  That is H98 (with multi-param type classes).

I propose we use the sugar, so that the implementor can decide how to, um, 
implement.
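
To make that concrete, here is a minimal hand-rolled sketch of the DORF-style
desugaring in plain GHC Haskell. Every name below (Proxy_name, Person, and so
on) is invented for illustration; this is not the exact code the desugarer
would emit, just the general shape:

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, FlexibleContexts #-}

data Proxy_name = Proxy_name              -- proxy type standing for the label `name`

class Has r fld t where                   -- three parameters: record, label, result
  get :: r -> fld -> t

-- the overloaded selector a `fieldLabel name` declaration would introduce:
name :: Has r Proxy_name t => r -> t
name r = get r Proxy_name

-- a record declaration using that label then contributes an instance:
data Person = Person { personName :: String }

instance Has Person Proxy_name String where
  get (Person n) _ = n

demo :: String
demo = name (Person "Alice")              -- resolves via the Person instance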

 modulo the debate about the value-level syntax for records, and modulo 
 the debate about whether Has should be exposed to users or hidden inside 
 GHC. Is this no longer the case?

Was _never_ the case, you've been paying too much attention to the wrong 
postings. Look at the wiki pages: that's why I posted it.

I repeat: nobody is using a type-level string. You (or someone) is making it 
up.

This is beginning to exasperate me. Read the wikis: can you see type-level 
string? Certainly not on the DORF pages, I'm pretty sure not on the SORF 
pages. If you find it somewhere else, tell me and I'll get it changed.

AntC



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-29 Thread J. Garrett Morris
On Wed, Feb 29, 2012 at 11:05 PM, AntC anthony_clay...@clear.net.nz wrote:
 I repeat: nobody is using a type-level string. You (or someone) is
 making it up.

It isn't clear where that idea came from.

On Thu, Sep 15, 2011 at 7:51 AM, Simon Peyton-Jones
simo...@microsoft.com wrote:
 Yes, it would, and of course any impl of TDNR would need an internal
 constraint similar to your Select.  In my original proposal I was
 hiding that, but it has to be there in the implementation.  But you
 are right that making it explicit might be a good thing.  Esp with
 Julien's new kind stuff (coming soon) we should be able to say

 class Select (rec :: *) (fld :: String) where
   type ResTy rec fld :: *
   get :: rec -> ResTy rec fld

 data T = MkT { x,y :: Int }
 instance Select T "x" where
   get (MkT {x = v}) = v

Oh.

On Mon, Jan 2, 2012 at 4:38 AM, Simon Peyton-Jones
simo...@microsoft.com wrote:
 It seems to me that there's only one essential missing language feature,
 which is appropriately-kinded type-level strings (and, ideally, the ability
 to reflect these strings back down to the value level).
snip
 Specifically

 * Allow String as a new kind

 * Now you can define classes or types with kinds like

 MyCls :: String -> a -> Constraint

 T :: String -> *

 * Provide type-level string literals, so that “foo” :: String

Huh.

You may want to call your type-level-things-that-identify-fields
strings, labels, fieldLabels, or rumbledethumps, but surely that's not
the point of interest here?
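
For concreteness, that Select sketch can be written out in full against
type-level string literals of kind Symbol (as GHC.TypeLits later provides
them). A self-contained approximation, assuming those facilities rather than
anything in the proposals themselves:

{-# LANGUAGE DataKinds, KindSignatures, TypeFamilies,
             MultiParamTypeClasses, FlexibleInstances #-}
import GHC.TypeLits (Symbol)
import Data.Proxy (Proxy (..))

class Select rec (fld :: Symbol) where
  type ResTy rec fld :: *
  get :: Proxy fld -> rec -> ResTy rec fld

data T = MkT { x, y :: Int }

instance Select T "x" where
  type ResTy T "x" = Int
  get _ (MkT { x = v }) = v

demo :: Int
demo = get (Proxy :: Proxy "x") (MkT 3 4)   -- selects the x field, giving 3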

 /g

--
Would you be so kind as to remove the apricots from the mashed potatoes?

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread AntC
Barney Hilken b.hilken at ntlworld.com writes:

 
 
  Please remember SPJ's request on the Records wiki to stick
  to the namespace issue. We're trying to make something
 better than H98's name clash. We are not trying to build
  some ideal polymorphic record system.
 
 I must admit that this attitude really gets my hackles up. 
 
 Barney.
 

So Barney, the obligation on you is clear:
- pick one of the many theoretically sound clean designs for records
- (or make up your own)
- write it up on the wiki
- show what work it would need to get there from where we are
- encourage discussion on this forum

One thing I can be pretty sure of: proposals not written up won't get 
implemented.

AntC




___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread AntC
wren ng thornton wren at freegeek.org writes:

 
 I'm not sure that I like the current proposals for how to control the 
 non/automatic-ness of polymorphism (for reasons I can spell out later, 
 if desired). But we definitely want to have something that's a bit more 
 cultured than simply making all record projectors polymorphic over records.
 

Wren, I'm not sure if you've got it straight. (It's a subtle issue.) This is 
an area where SORF differs from DORF:
- SORF can't hide the representation of a given field name
  (so a client program can 'guess' a field identifier)
  That's because SORF is driven by String Kind, which cannot be scope 
controlled.

- DORF uses (Proxy) types for (roughly) the same purpose as the String Kinds.
  But because they're types, you can control the scope, and keep the 
abstraction.


AntC


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread AntC
wren ng thornton wren at freegeek.org writes:
 
 FWIW, this is the concern I alluded to earlier. Namely that we may want 
 to have two (or more), er, 'classes' of records--- where a field is 
 polymorphic over an individual class, but we don't want those classes to 
 merge simply because they happened to choose the same name and type for 
 the field.
 

I agree 'classes' is a misleading word for that. Isn't what you want two (or 
more) namespaces for records -- that is, for their fields?

This is the DORF approach. (Barney seems to be describing the SORF approach -- 
he's got DORF wrong.)

One of the namespaces includes Product.name; another Person.name. (So I'm 
taking a namespace as equivalent to a module.) You can have multiple records 
with a `name` field in each module (Customer, Employee, Business Contact, 
etc) -- so this is already better than H98.

Providing you're coding in a module that imports only one of those namespaces, 
you can use `name` unqualified.

If you're coding in a module that imports both, you must use `name` qualified.

If you try to apply (Product.name customer) you'll get a type failure (no 
instance).



 I'm not sure it's a good proposal, but it seems like the only way to 
 handle this issue is to (1) introduce a new kind for 
 semantically-oriented field names,

That's what SORF does: the String Kind

 and (2) make the Has class use that 
 kind rather than a type-level string.

No proposal is using a _type_-level string. Barney's confused you.

DORF uses a type (regular importable/qualifiable/hidable) with a prefix to the 
field name:
data Proxy_name

Where you're in a module that imports both the `name`s per above, the 
desugarrer would generate Product.Proxy_name and Person.Proxy_name.

(That's all rather awkward to get right, which is why I prefer the sugar.)

 By (1), what I mean is that rather 
 than referring to the field as name, we would declare PersonalName and 
 BrandName and then use those in lieu of the string. And if we do that, 
 then (2) demands that we must somehow make explicit which one we mean, 
 should we want the `name` field to be polymorphic for some given record 
 declaration.
 

AntC



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread AntC
Barney Hilken b.hilken at ntlworld.com writes:

 
  My objection is that I'm not sure if there is ever a case where you
  really want things to be polymorphic over all records.
 
 Well, I don't have a simple, really convincing example, but there are 
certainly things I want to play with.
 More importantly, DORF automatically attaches one class to each label, but 
this is often not what you want.

Barney, you seem to be very confused. Added to that you're mixing up DORF with 
SORF. Agreed both proposals are similar, but there are crucial differences and 
you've completely befuddled them.

In DORF (and SORF) there is only one class -- namely `Has`, with methods `get` 
and `set`.

SORF 'attaches' one Kind for each label. (I'm not sure 'attaches' is the right 
word -- perhaps 'provides' is better? It's a String Kind same as the label 
name.)
In DORF you must _declare_ a _type_ for the label. (Hence **Declared** 
Overloaded Record Fields.) Since it's declared and it's a type, it has usual 
namespace (module) control. You can declare as many as you want, providing you 
respect the namespacing.


 For example, if you have two fields firstname and lastname the 
associated classes are less useful:
 what you really want is 
 
   class (Has r "firstname" String, Has r "lastname" String) => HasPersonalName r
 

That example is a SORF declaration: it uses String Kinds.

The DORF equivalent would be:

  class (Has r Proxy_firstname String, Has r Proxy_lastname String) =>
   HasPersonalName r
Note: familiar Haskell proxy types, _not_ new/untried String Kinds.

That Proxy stuff is a mouthful, and easy to mistype. I prefer the sugar:
  class (r{ firstname, lastname :: String }) => ...


 so that you can define
 
  fullname :: HasPersonalName r => r -> String
  fullname r = r.firstname ++ " " ++ r.lastname
 
 You may also want to define subclasses to express more specific conditions. 
In general, the compiler
 cannot automatically deduce what is semantically important: you need to 
define it yourself. The Has
 class is the base on which you can build.

 ...

 My approach achieves the same as
 DORF (and more), but using existing language features instead of introducing 
new ones.
 
 Barney.
 

What you say there applies to SORF, not DORF. DORF deliberately uses existing 
class features and familiar type instance resolution. (Because I didn't like 
the 'experimental' Kinds in SORF, and you couldn't control their namespace.)

So what you call My approach is almost identical to DORF -- except that 
you're confusing it with SORF syntax. What you're criticising is SORF, not 
DORF.

AntC





___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread AntC
Isaac Dupree ml at isaac.cedarswampstudios.org writes:

 
 
 Darn, I misinterpreted DORF.  There was too much text and too many options.
 

Sorry! You might do better to stick to the implementor's page if you want a 
less 'option-laden' description. Also better to skip the speculative bits 
until you've got the basics clear.


 Tell me if I'm correct:
 A. Every declaration with record syntax creates Has instances for all 
 fields [1].

Correct

 B. Has, get and set may not be written by users (guessing due to 
 representation-hiding fail).

Correct: I want the implementors to have design space for the mechanics behind 
the scenes.

 C. You create functions using fieldLabel name [...]

Correct-ish: create both functions and a proxy types. It's really the type 
that drives inference, not the function.

 D. which have the magical effect of, when in scope unqualified, ...

Nothing 'magical' going on: they're ordinary functions and types, with 
ordinary import/export/hiding. And you can use them qualified if you'd rather.

 ... causing 
 data types defined with record syntax to be accessible through that 
 particular fieldLabel function (and no other way).

The fieldLabel function behaves very similar to the H98-generated function. 
The difference is with DORF it's overloaded, but H98 is monomorphic.

You can still access the fields as per H98 through pattern match (using the 
data constructor), or positionally. [Yes I know that if we were designing 
a 'true' polymorphic record system we'd ban positional access.]

 E. (When two fieldLabels of the same name are in scope unqualified, 
 declaring a record containing that name is an error.)

Correct. Just like having any other clash of names in scope (for example all 
the competing declarations of `map`). And just like those, you can use module 
qualifiers to resolve the clash.

 F. So adding an import (for some other reason for your code) that 
 happens to include a fieldLabel can make your records accidentally be 
 more visible, rather than be compile-error or no-effect.

Wrong: You cannot use a fieldLabel `name` declared in module/namespace A to 
access a record with a field `name` declared in module B. You'll get a 'no 
instance' compile fail. Same familiar rules as for any instance resolution.
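
As a rough single-file approximation of what that 'no instance' failure looks
like (the two proxy types below stand in for a `name` label declared in two
different modules; every name here is invented for illustration):

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, FlexibleContexts #-}

data Proxy_nameA = Proxy_nameA    -- the label exported by "module A"
data Proxy_nameB = Proxy_nameB    -- the label exported by "module B"

class Has r fld t where
  get :: r -> fld -> t

nameA :: Has r Proxy_nameA t => r -> t
nameA r = get r Proxy_nameA

nameB :: Has r Proxy_nameB t => r -> t
nameB r = get r Proxy_nameB

data Foo = Foo { fooName :: String }     -- record declared against B's label

instance Has Foo Proxy_nameB String where
  get r _ = fooName r

ok :: String
ok = nameB (Foo "x")         -- resolves via the instance above

-- bad = nameA (Foo "x")     -- rejected: no instance (Has Foo Proxy_nameA String)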

This is the crucial difference compared to SORF: which can't control the scope 
of its String Kind. (Apologies that I added a speculative discussion of 
whether DORF could use String Kinds. I said that if doing so would open 
the 'back door' to the abstraction, then I'll stick with types.)

 
 I feel weird about record fields having an option that depends on 
 whether something's in scope and cannot be controlled syntactically. 
 Maybe we can fix that without making the syntax worse.
 

Absolutely agree. I designed DORF to correct that deficiency in SORF (as I saw 
it).


 G. It is possible (but rather ugly) to use dot-notation when there are 
 multiple fieldNames of the same name in scope. [2]
 

Yep, agree with the ugly.


 Hmm.  Maybe this is Haskelly as well as convenient enough.  Did I get 
 everything right?  What do you think about my concern about F?
 

Well done! Nearly everything. I hope I've allayed your concerns re F.

AntC


 [1] http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/ImplementorsView
 [2] http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/DotPostfix#UsingDotnotationamongstqualifiednames
 





___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread Oliver Batchelor
Hi,


 Wrong: You cannot use a fieldLabel `name` declared in module/namespace A to
 access a record with a field `name` declared in module B. You'll get a 'no
 instance' compile fail. Same familiar rules as for any instance resolution.

 This is the crucial difference compared to SORF: which can't control the scope
 of its String Kind. (Apologies that I added a speculative discussion of
 whether DORF could use String Kinds. I said that if doing so would open
 the 'back door' to the abstraction, then I'll stick with types.)


Perhaps this situation could occur though?

Module A
fieldLabel name String

Module B
import A -- unknowingly picking up the name label

data Foo = Foo { name :: String } -- uses the name label by accident

So there'd have to be some syntax to make sure you intend to use a
label rather than accidentally use it?
(Not that this is a big issue, the situation is surely minor compared
to sharing unrelated labels all the time)

Oliver

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread AntC
Barney Hilken b.hilken at ntlworld.com writes:

 
 After more pondering, I finally think I understand what the DORFistas want. 

Barney,

1) please don't personalise. I designed DORF, built a proof of concept, got it 
running, asked for feedback (got some very useful thoughts from SPJ), built a 
prototype, posted the wiki pages. I'm not a DORFista; I'm somebody who wants 
to improve the Haskell records namespace issue.

2) whether or not you think you know what some people want, you don't 
understand DORF, and you've mixed it up with SORF. You've then caused a great 
long thread of confusion.

In particular, at exactly where DORF is designed to avoid (what I see as) a 
weakness in SORF, you've alleged DORF has that weakness.

Here is an example:
 ...
(by the way, you've used SORF syntax in those examples)

 It doesn't make any sense to apply your functions to my records or vice-
versa,

Exactly! and that's what the DORF design avoids, whereas SORF suffers from it.

 but because we both chose the
 same label,

SORF uses the same label in the sense of the same String Kind.

 the compiler allows it. Putting the code in separate modules makes no 
difference, since
 labels are global.

DORF's labels are not global, they're proxy _types_ so that the scope is 
controlled in the usual way. So using separate modules makes all the 
difference.

 
 Here is a simple solution, using SORF:
 ...

I think your solution would work just as well 'translated' into DORF.

(But then it's a solution to something that isn't a problem in DORF.)

 
... than building the mechanism in to the language as DORF does, ...
 
 Barney.
 

DORF is not building the mechanism into the language, nor is it introducing 
any new language features, only sugar. The prototype runs in GHC v7.2.1. All 
I've done is hand-desugared. (Look at it for yourself, it's attached to the 
implementor's page.)

SORF, on the other hand, needs user-defined Kinds, which are only just being 
introduced in v7.4, and don't yet include String Kind.

AntC





___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread AntC
Gábor Lehel illissius at gmail.com writes:

 
 2012/2/25 Gábor Lehel illissius at gmail.com:
  Please correct me if I've misunderstood or mischaracterized any aspect of 
DORF.
 
 Okay, I did end up misunderstanding and mischaracterizing at least two
 aspects of DORF.
 
 Re-reading the wiki page:
 
 
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields
 

Sorry, Gábor, but you've still mis-understood. I tried so hard to explain it 
clearly!

 it's clear that you would not have to write fieldLabel declarations
 for every single field of every single record,

But yes you do have to declare every fieldLabel that's going to appear in a 
record decl within a module compiled under DORF. (Even if the field only 
appears once in one record -- sorry!)

You can, though, mix records/fields from modules compiled under H98 monomorphs 
(providing no clash of names!)

 only for the ones you
 wish to be shared and usable polymorphically. By default, fields of
 individual records would be specific to that record (monomorphic in
 the type of the record),

No! the record decl would (try to) generate an instance for every field of 
every record, then the compile would fail because there was no fieldLabel 
declared.

 So the difference between DORF and my variant would be:
 ...

(You've misunderstood DORF, so got the next bit wrong.)

 
 It wasn't clear to me before that DORF retains record-monomorphic
 fields, ...
(It doesn't!)
 In DORF you can
 presumably still use a record-monomorphic field selector to help infer
 the concrete type of the record

Kind-of: you can declare a fieldLabel with a monomorphic type (if you want it 
to only appear in a single record), then it helps type inference.
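
For example (plain Haskell stand-ins, not DORF-generated code): a label whose
declared type mentions one concrete record pins that record type down during
inference, just as an H98 selector does.

data Customer = Customer { custId :: Int }

customerId :: Customer -> Int       -- label fixed to a single record type
customerId = custId

f r = (customerId r, r)             -- inferred:  f :: Customer -> (Int, Customer)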

 ... 
 One troubling consequence of DORF -- again, if I'm understanding
 things correctly -- is that due to implicit field instances a module
 import can change the meaning of your program:

No you aren't understanding correctly, so: no, a module import can't change 
the meaning. (It might mean the program fails to compile, due to name clash.)

I've responded to same later posts to clarify this.

AntC


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread AntC
Henrik Nilsson nhn at Cs.Nott.AC.UK writes:

 
 Hi,
 
 Just checking my understanding here as I have not followed this thread
 in all its details.
 
So, with both DORF and your variant of it, am I correct in understanding
 that polymorphic fields, be it universally quantified as in
 

Good question Henrik! It's explicitly answered in the wiki, because it's a 
tricky area. Briefly:
- both of those varieties of poly fields are possible
- both can be declared in records
- both can be extracted and applied in polymorphic contexts
- DORF supports updating the universally quantified,
  including changing the type of the field and therefore of the record.
  (Also there's a Quasifunctor proposal for SORF to do that.)
- Neither approach supports updating the existential/higher-ranked variety.

(There are all the complexities you discuss in detail. They are solved (with 
quite a bit of complexity behind the scenes), except for updating h-r types.)

You can still use explicit data constructors to pattern match and update h-r 
fields.
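
For example (record and field names invented here, not taken from either
proposal), this is the kind of thing that still works with a higher-ranked
field, even though overloaded update does not:

{-# LANGUAGE RankNTypes #-}

data T = MkT { label :: String, poly :: forall a. [a] -> [a] }

-- extracting the h-r field and using it at several types is fine:
useIt :: T -> ([Int], String)
useIt t = (poly t [1, 2, 3], poly t "abc")

-- "updating" by rebuilding through the data constructor is also fine:
setPoly :: (forall a. [a] -> [a]) -> T -> T
setPoly f (MkT l _) = MkT l f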

Question for you: (you've already given an example of wanting to update a 
universally quant'd field and change its type)
   Do you want to update a h-r field?
   If so, what's the use case?

AntC

 data ARecordType a =
  C1 {
  ...,
  fieldX :: a,
  ...,
  fieldY :: a -> a,
  ...
  }
 
 or existentially quantified as in:
 
 data AnotherRecordType =
  forall a . C2 {
  ...,
  fieldU :: a,
  ...,
  fieldV :: a -> Int,
  ...
  }
 
 would no longer be possible?
 
 Note that the type variable a in both cases scope just over the
 constructor(s) of the data type in question. So any attempt at
 declaring the types of the fields outside of this context,
 be it explicitly with the fieldLabel notation, or implicitly as
 per your proposal, would not be possible. E.g.
 
 fieldLabel fieldY :: a -> a
 
 would presumably mean
 
 fieldLabel fieldY :: forall a . a -> a
 
 resulting in ARecordType becoming second-order polymorphic
 where the value of fieldY would *have* to be a polymorphic function,
 which is very different from the original definition.
 
 Similarly, the whole point with the existentially quantification is
 to allow a number of fields to share some specific but arbitrary type,
 which again means that any attempt to type these fields outside of the
 context of the datatype to which they belong would result in something
 different.
 
 Note that fieldU and fieldV cannot be used as projection functions
 due to escaped type variables, but that field selection by pattern
 matching is perfectly fine.
 
 Both constructions above are very useful and, I'd argue that a design
 that rules them out actually is a rather poor fit for a language like
 Haskell.
 
 To be completely honest, I, at least, would be much happier keeping
 working around the present limitations of Haskell's named fields by
 picking my field names carefully, than losing the above.
 
 Or am I missing something? E.g. is the idea that sharing of fields
 only applies to fields of monomorphic type, i.e. whose type can be
 declared globally?
 
 Best,
 
 /Henrik
 





___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread AntC
J. Garrett Morris jgmorris at cs.pdx.edu writes:

 
 Has x f has no semantic content besides type x having an f field; Ord
 has (at least in the programmer's mind, even if the language can't check
 it) meaning beyond the simple presence of a compare function.
 
  /g
 

Note that under both SORF and DORF, there are three arguments to the `Has` 
class. The third is specifically to spec or constrain the type of the result.

A decl:

data Rec a = Ord a => Rec{ flda :: a }

Expects:

flda :: (r{ flda :: a }, Ord a) => r -> a

Where the {...} in the constraint is sugar for the Has class. (DORF and SORF 
differ slightly in how that's implemented.)

AntC


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread AntC
Barney Hilken b.hilken at ntlworld.com writes:

 
 
  This should be used to generate
  internal constraints and not be exposed to the end user and not
  automatically abstract over fields.
 

Barney, I'm not going to try to defend or explain what Greg means by that 
comment, but I do want to talk about:

 insistence that the Has class should be hidden from the user.
 ...
 I think the Has class would be useful to programmers and
 no harder to understand than other Haskel classes. It should not be hidden.
 
 Barney.
 

I agree that what the `Has` class is doing is useful, and indeed the 
programmer needs to understand it.

SPJ suggested syntactic sugar for the Has class. The DORF proposal embraces it 
whole-heartedly, and expects the programmer would use the sugar. There's two 
reasons for avoiding writing explicit `Has`:

1. It's tricky syntax and easy to get wrong.
   Specifically, all your examples in later posts have confused it.
2. The implementation of overloadable record fields (whether SORF or DORF)
   is going to be tricky to get right. We'll need equality and class
   constraints. There needs to be some space for the implementors to tune
   the design.

I think the sugar is hiding the implementation, not the existence of the `Has` 
class -- as is good practice.

AntC





___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread AntC
Greg Weber greg at gregweber.info writes:

 
 I looked at the DORF implementers view
 
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/ImplementorsView
 
 It appears that we still have no solution for record updates that
 change the type of a polymorphic field.

There's two varieties of polymorphic fields:
- the type-changing variety
  DORF has a solution for update/changing the type of field and record
 (always has since the proof of concept in December)
  SORF has a speculative suggestion of Quasifunctor
(looks like this needs tightening up??)
- the higher-ranked variety
  neither proposal has a way of updating
 (well, DORF did but it's an unscalable hack [SPJ],
  I put it on the wiki anyway, in case somebody can improve it)


AntC


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread AntC
Oliver Batchelor saulzar at gmail.com writes:

 
 Hi,
 
 
  Wrong: You cannot use a fieldLabel `name` declared in module/namespace A to
  access a record with a field `name` declared in module B. You'll get a 'no
  instance' compile fail. Same familiar rules as for any instance resolution.
 
  This is the crucial difference compared to SORF: which can't control the 
scope
  of its String Kind. (Apologies that I added a speculative discussion of
  whether DORF could use String Kinds. I said that if doing so would open
  the 'back door' to the abstraction, then I'll stick with types.)
 
 
 Perhaps this situation could occur though?
 
 Module A
 fieldLabel name String
 
 Module B
 import A -- unknowingly picking up the name label
 
 data Foo = Foo { name :: String } -- uses the name label by accident
 
 So there'd have to be some syntax to make sure you intend to use a
 label rather than accidentally use it?
 (Not that this is a big issue, the situation is surely minor compared
 to sharing unrelated labels all the time)
 
 Oliver
 

Thanks Oliver, hmm ...

Did module B import A unqualified?

Did module B not have its own declaration of fieldLabel `name`?
And presumably module B has set the option to use DORF.

Then DORF is going to take it that you mean to share the `name`. (And actually 
I don't see much harm resulting.)

Them's the rules.

If there's a fieldLabel `name` in Module B, Foo will use that.
If furthermore B imports A unqualified, that's a clash of fieldLabels, so 
compile fail.
If module B is compiled with H98 style records, there's a name clash with the 
H98 field selector function.

I think this is no worse (and no better) than business-as-usual 'accidental' 
usage-without-declaration matching an unknowingly imported binding.

(As part of the 'minimal changes' requirement, I'm trying to avoid syntax 
changes to record decls.)

AntC




___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-28 Thread AntC
Oliver Batchelor saulzar at gmail.com writes:

 
 I think the DORF approach is quite principled in its namespacing. The
 labels are just normal functions which can be exported and imported
 between modules. I believe that is its main strength - so I think to
 say "it only solves the narrow name-spacing issue within a module" is
 not quite accurate.
 

Thank you Oliver, yes you got it. And very restrained of you to say not quite 
accurate. In the following I shall try to do the 'egoless programmer' thing.

Seeing as DORF's control over namespacing is a considerable improvement over 
SORF (in my view), I'm particularly miffed to see an allegation that it can't 
do EXACTLY WHAT IT'S DESIGNED TO DO.

And since that allegation has been repeated in several places, just so 
there's no probable possible shadow of doubt, no possible doubt whatever, 
I've updated the wiki with an illustration:
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields#ImportExportandRepresentationhiding
and attached a working example to the implementor's page.

The example shows that within a single record decl:
  0. You can import fieldLabel decls.
  1. You can create fields that share Labels with imports.
  2. You can create fields that don't share, even with the same Label name.
   (That is, the module system continues to control the namespace.)
  3. You can prevent using the wrong field selector with the wrong record type,
even if they have the same Label name.

Points 2 and 3 are there especially for all the people who want multiple 
`name`s that label String fields but don't mean the same thing.

The example shows that if you use the wrong `name` selector with the wrong 
record type, you get an instance failure (even if the record type has a field 
`name`).

Just before they slip on the straightjacket for all this mumbling about what's 
in a name, one further point:

- if there's a difference of meaning going on,
- you can use the type system to manage your meanings,
- with a newtype to screen the representation.

That's what Chris Done's example is doing, and it's a good discipline to 
manage complex data structures. (Thank you Greg for posting it back on to the 
Records wiki front page). 
http://hackage.haskell.org/trac/ghc/wiki/Records#Problemswithusingthecurrentmodulenamespacemechanism

Chris's post is what started me down the track of designing DORF, as an 
improvement over SORF, and to avoid the suggestions of sub-modules.

AntC





___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-27 Thread AntC
Greg Weber greg at gregweber.info writes:
 
 
  What on earth do you mean by not automatically abstract
  over fields?
 
 Abstraction over fields is the ability to write a function that works
 on two records with the same field label.
 

Thanks Greg, I see you've put something on the wiki about abstract over 
fields. The example code doesn't seem to be right, so I'm still confused what 
you mean ...

The example has:
getA = r.a

I think that should be:
getA r = r.a

(Otherwise how does `getA` know which record to extract the `a` from?)

And you give no type signature for getA so (at a wild guess) by abstract you 
mean that `getA` extracts the `a` from any record `r` with an `a` field(?)

Under SORF: r.a desugars to a call to the `get` method.

Under DORF: r.a desugars to (a r), then we can eta-reduce:
getA r = r.a == (a r)
getA = a

Either way, I don't see that `getA` is adding anything over plain field `a`.

There _is_ a difference between DORF and SORF here. In DORF I can declare:

getF f r = r.f -- works for any field of any record
   -- RHS desugars to (f r), so getF === ($)

And can use it, for example:
getF lastName cust1
getF fullName person2

I don't think you can do this is SORF (but please check with SPJ). In 
particular, I don't think you could call this function and pass an argument 
into it for the field name.
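
A minimal sketch of why that works under DORF: the field label is just an
ordinary overloaded function, so it can be passed as an argument like any
other. (Customer and lastName below are invented stand-ins, not generated
code.)

getF :: (r -> t) -> r -> t
getF f r = f r                        -- the RHS is just (f r), so getF behaves like ($)

data Customer = Customer { custLast :: String }

lastName :: Customer -> String        -- stand-in for a DORF field selector
lastName = custLast

demo :: String
demo = getF lastName (Customer "Jones")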

That's because in SORF the dot notation is desugared at the point it occurs 
(according to the wiki on SORF), and the `f` appearing on the RHS is a bound 
variable, not a field name as such. (In fact, I wonder if SORF would take the 
dot notation and try to desugar it to field f, then either get a type 
failure or (by accident) extract the wrong field.)

Please correct this on the wiki.

AntC



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-27 Thread J. Garrett Morris
On Mon, Feb 27, 2012 at 4:52 PM, AntC anthony_clay...@clear.net.nz wrote:
 And can use it, for example:
    getF lastName cust1
    getF fullName person2

 I don't think you can do this is SORF (but please check with SPJ). In
 particular, I don't think you could call this function and pass an argument
 into it for the field name.

You only need some way to write a value of a type of kind String; the
section "should Get have a proxy argument" of the SORF wiki page
discusses approaches to this problem.  This example demonstrates the
flexibility of the Proxy-based approach, but does not distinguish
between DORF and SORF.

 /g

P.S. Perhaps we should find record proposals that don't sound like
noises my cat makes when coping with a hairball? ;)

--
Would you be so kind as to remove the apricots from the mashed potatoes?

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell -- updating higher-ranked fields: required?

2012-02-27 Thread AntC
Greg Weber greg at gregweber.info writes:

 
 
  No, I don't think anybody has a satisfactory approach to
  updating polymorphic/higher-ranked fields. (DORF mentions
  one, but it's a ghastly hack.
 
 So are the proposals dead until this is tackled, or should SORF/DORF
 propose not to allow that?
 

That's an important question. I've asked in the DORF proposal how big is the 
demand to update h-r fields?

Note that is something you can do currently with H98 records/fields.

Is it good enough to be able to extract and use h-r fields in polymorphic 
contexts? (Both SORF and DORF can do that.)

Is it good enough to be able to create records with h-r fields (using the data 
constructor)?  (Both SORF and DORF can do that.)


AntC


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-26 Thread wren ng thornton

On 2/26/12 12:38 AM, Anthony Clayden wrote:

Wren/all

Please remember SPJ's request on the Records wiki to stick
to the namespace issue. We're trying to make something
better than H98's name clash. We are not trying to build
some ideal polymorphic record system.


I believe my concern is a namespace issue. There are certain 
circumstances under which we do not want names to clash, and there are 
certain circumstances under which we do want them to clash; just as 
sometimes we want things to be polymorphic and sometimes not.


I haven't been following all the different proposals out there, but the 
ones I did see before tuning-out all took the stance that for each given 
field either (1) this field name is unique and always clashes, or (2) 
this field name is shared and never clashes. This is problematic for a 
number of reasons. The particular reason I raised is that there are 
times when we would like for a field name to be shared, but only shared 
among a specified group of records and clashing with all other records 
(which may themselves form groups that share the name as well).


That's not a complaint against DORF per se. I haven't read the DORF 
proposal, so perhaps it already handles this issue. Rather, it's a 
general concern that I haven't seen discussed very much while skimming 
this thread.


--
Live well,
~wren

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-26 Thread Barney Hilken
 
 I'm not sure it's a good proposal, but it seems like the only way to handle 
 this issue is to (1) introduce a new kind for semantically-oriented field 
 names, and (2) make the Has class use that kind rather than a type-level 
 string. 

The second half of my message showed exactly how to handle the problem, using 
nothing more than existing Haskel features (and SORF for the record fields). 
The point is that the extra complexity of DORF is completely unnecessary.

Barney.



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-26 Thread Barney Hilken

 Please remember SPJ's request on the Records wiki to stick
 to the namespace issue. We're trying to make something
 better than H98's name clash. We are not trying to build
 some ideal polymorphic record system.

I must admit that this attitude really gets my hackles up. You are effectively 
saying that, so long as the one narrow problem you have come across is solved, 
it doesn't matter how bad the design is in other ways. This is the attitude 
that gave us the H98 records system with all its problems, and the opposite of 
the attitude which gave us type classes and all the valuable work that has 
flowed from them. Haskell is supposed to be a theoretically sound, cleanly 
designed language, and if we lose sight of this we might as well use C++. 
Whatever new records system gets chosen for Haskell, we are almost certain to be 
stuck with it for a long time, so it is important to get it right.

Barney.



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-26 Thread Jerzy Karczmarczuk

Barney Hilken :

Haskell is supposed to be a theoretically sound, cleanly designed language, and 
if we lose sight of this we might as well use C++.
Well, since I have nothing to say about new records, I don't say 
anything, but I have the impression that when we got to this level of 
discussion, it is a beginning of the end. Veeery, very funny...


Imagine an ecclesiastic General Council, and the Pope saying:

Brothers Bishops! Our new dogmas must be absolutely flawless, pure and 
sound, otherwise we might as well become Muslims.



Inchaa whatever.

Jerzy Karczmarczuk
Caen, France


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-26 Thread Greg Weber
The DORF proposal is bringing to light some universal issues with
records, so I am glad they are being hashed out. However, at this
point it is premature optimization: we still don't have a proposal
that solves the narrow issue of record name-spacing with Haskell.

At this point SORF/DORF need a hero to figure out how to make them
work with all of Haskell's current type capabilities. The DORF
proposal makes some steps forward, but also backwards: it only solves
the narrow name-spacing issue within a module. If a record is imported
into another module, it will still clash.

I stated this months ago, and I think it is even truer now: the sugar
approach to records does not appear to actually be simplifying things,
therefore we should consider adding a new first-class construct.

I don't know much about the subject of first-class records, but so far
I have come across a few styles of existing implementations in FP:
structural typing, records as modules, and row types.
I recently linked to Ur's extensible record implementation (that uses
row types) from the wiki:
http://adam.chlipala.net/papers/UrPLDI10/UrPLDI10.pdf

We are trying to stay focused on the narrow issue of solving
name-spacing. I think we can stay narrow if we do implement first
class records but hold off for now on presenting any special
capabilities to the programmer.

At this point we are months into the records process without a clear
way forward. I think we should be willing to take any workable
implementation and just avoid exposing the implementation details for
now. If anyone can lend a hand at figuring out SORF updates or
determining if type inference of records in the Ur paper can be made
to work in Haskell, that would be very helpful!

Greg Weber

On Sun, Feb 26, 2012 at 7:01 AM, Jerzy Karczmarczuk
jerzy.karczmarc...@unicaen.fr wrote:
 Barney Hilken :

 Haskell is supposed to be a theoretically sound, cleanly designed language,
 and if we lose sight of this we might as well use C++.

 Well, since I have nothing to say about new records, I don't say anything,
 but I have the impression that when we got to this level of discussion, it
 is a beginning of the end. Veeery, very funny...

 Imagine an ecclesiastic General Council, and the Pope saying:

 Brothers Bishops! Our new dogmas must be absolutely flawless, pure and
 sound, otherwise we might as well become Muslims.


 Inchaa whatever.

 Jerzy Karczmarczuk
 Caen, France



 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-26 Thread Oliver Batchelor
Hi Greg,

(Apologies for second mail, I didn't include the list)

I think the DORF approach is quite principled in its namespacing. The
labels are just normal functions which can be exported and imported
between modules. I believe that is its main strength - so I think to
say "it only solves the narrow name-spacing issue within a module" is
not quite accurate.

Sure - if you have two unrelated modules - say Data.Foo and Data.Bar
each with records with fields x and y they will clash. But this is a
very common situation e.g. how many functions called map are defined
in various modules?

If the modules are related, however - we can re-use the same label
without problem (in the same way we can define a type class Functor
for all the various map functions). I don't think it is so important
that we have globally common labels - if anything I would think that
would be an engineering goal to avoid? (Imagine how many labels called
x with different types may spring up)

-- First you create the labels:

module A (width, height)

width :: r { width :: Float } => r -> Float
height :: r { height :: Float } => r -> Float

-- We can use them in one module.

module B (Rectangle (..)) where

import A (width, height)
data Rectangle = Rectangle { width, height }  -- Potentially don't
need to give these types, since they're already defined by the label

module C (Box (..)) where

import A (width, height)
length :: r { length :: Float } => r -> Float

data Box = Box { width, height, length }  -- Use the same fields again
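
A hand-desugared, runnable approximation of what that sharing buys (the
proxy/Has encoding and all names below are assumptions for illustration,
following the general scheme discussed in this thread, not anyone's exact
generated code):

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, FlexibleContexts #-}

data Proxy_width  = Proxy_width
data Proxy_height = Proxy_height

class Has r fld t where
  get :: r -> fld -> t

width :: Has r Proxy_width Float => r -> Float
width r = get r Proxy_width

height :: Has r Proxy_height Float => r -> Float
height r = get r Proxy_height

data Rectangle = Rectangle { rWidth, rHeight :: Float }
data Box       = Box       { bWidth, bHeight, bLength :: Float }

instance Has Rectangle Proxy_width  Float where get r _ = rWidth r
instance Has Rectangle Proxy_height Float where get r _ = rHeight r
instance Has Box       Proxy_width  Float where get b _ = bWidth b
instance Has Box       Proxy_height Float where get b _ = bHeight b

-- one function usable with both record types:
area :: (Has r Proxy_width Float, Has r Proxy_height Float) => r -> Float
area r = width r * height r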

I've been following the discussion with interest.

Cheers,
Oliver

On Mon, Feb 27, 2012 at 5:47 AM, Greg Weber g...@gregweber.info wrote:
 The DORF proposal is bringing to light some universal issues with
 records, so I am glad they are being hashed out. However, at this
 point it is premature optimization: we still don't have a proposal
 that solves the narrow issue of record name-spacing with Haskell.

 At this point SORF/DORF need a hero to figure out how to make them
 work with all of Haskell's current type capabilities. The DORF
 proposal makes some steps forward, but also backwards: it only solves
 the narrow name-spacing issue within a module. If a record is imported
 into another module, it will still clash.

 I stated this months ago, and I think it is even truer now: the sugar
 approach to records does not appear to actually be simplifying things,
 therefore we should consider adding a new first-class construct.

 I don't know much about the subject of first-class records, but so far
 I have come across a few styles of existing implementations in FP:
 structural typing, records as modules, and row types.
 I recently linked to Ur's extensible record impementation (that uses
 row types) from the wiki:
 http://adam.chlipala.net/papers/UrPLDI10/UrPLDI10.pdf

 We are trying to stay focused on the narrow issue of solving
 name-spacing. I think we can stay narrow if we do implement first
 class records but hold off for now on presenting any special
 capabilities to the programmer.

 At this point we are months into the records process without a clear
 way forward. I think we should be willing to take any workable
 implementation and just avoid exposing the implementation details for
 now. If anyone can lend a hand at figuring out SORF updates or
 determining if type inference of records in the Ur paper can be made
 to work in Haskell, that would be very helpful!

 Greg Weber

 On Sun, Feb 26, 2012 at 7:01 AM, Jerzy Karczmarczuk
 jerzy.karczmarc...@unicaen.fr wrote:
 Barney Hilken :

 Haskell is supposed to be a theoretically sound, cleanly designed language,
 and if we lose sight of this we might as well use C++.

 Well, since I have nothing to say about new records, I don't say anything,
 but I have the impression that when we got to this level of discussion, it
 is a beginning of the end. Veeery, very funny...

 Imagine an ecclesiastic General Council, and the Pope saying:

 Brothers Bishops! Our new dogmas must be absolutely flawless, pure and
 sound, otherwise we might as well become Muslims.


 Inchaa whatever.

 Jerzy Karczmarczuk
 Caen, France



 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-26 Thread Evan Laforge
On Sun, Feb 26, 2012 at 2:00 AM, wren ng thornton w...@freegeek.org wrote:
 I haven't been following all the different proposals out there, but the ones
 I did see before tuning-out all took the stance that for each given field
 either (1) this field name is unique and always clashes, or (2) this field
 name is shared and never clashes. This is problematic for a number of
 reasons. The particular reason I raised is that there are times when we
 would like for a field name to be shared, but only shared among a specified
 group of records and clashing with all other records (which may themselves
 form groups that share the name as well).

I had a proposal that, I think, wouldn't have that clash/no clash
distinction, because it doesn't have the notion of overloading a
single symbol ala typeclasses.  So I think it would sidestep that
whole problem.

Anyway, I copied it up at
http://hackage.haskell.org/trac/ghc/wiki/Records/SyntaxDirectedNameResolution
if only so I can feel like I said my thing and can stop mentioning it
:)

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-25 Thread Henrik Nilsson

Hi,

Just checking my understanding here as I have not followed this thread
in all its details.

Gabor Lehel wrote:

 I agree completely. This is what I like about DORF: the D stands for
 Declared, which is referring to the fact that the contracts are
 explicit. Record fields aren't automatically polymorphic based on
 their name and type, as with SORF, rather they are scoped and
 disambiguated in the same way as classes.

So, with both DORF and your variant of it, am I correct in understanding
that polymorphic fields, be it universally quantified as in

   data ARecordType a =
C1 {
...,
fieldX :: a,
...,
fieldY :: a -> a,
...
}

or existentially quantified as in:

   data AnotherRecordType =
forall a . C2 {
...,
fieldU :: a,
...,
fieldV :: a -> Int,
...
}

would no longer be possible?

Note that the type variable a in both cases scope just over the
constructor(s) of the data type in question. So any attempt at
declaring the types of the fields outside of this context,
be it explicitly with the fieldLabel notation, or implicitly as
per your proposal, would not be possible. E.g.

   fieldLabel fieldY :: a -> a

would presumably mean

   fieldLabel fieldY :: forall a . a -> a

resulting in ARecordType becoming second-order polymorphic
where the value of fieldY would *have* to be a polymorphic function,
which is very different from the original definition.

Similarly, the whole point with the existentially quantification is
to allow a number of fields to share some specific but arbitrary type,
which again means that any attempt to type these fields outside of the
context of the datatype to which they belong would result in something
different.

Note that fieldU and fieldV cannot be used as projection functions
due to escaped type variables, but that field selection by pattern
matching is perfectly fine.

Both constructions above are very useful and, I'd argue that a design
that rules them out actually is a rather poor fit for a language like
Haskell.

To be completely honest, I, at least, would be much happier keeping
working around the present limitations of Haskell's named fields by
picking my field names carefully, than losing the above.

Or am I missing something? E.g. is the idea that sharing of fields
only applies to fields of monomorphic type, i.e. whose type can be
declared globally?

Best,

/Henrik

--
Henrik Nilsson
School of Computer Science
The University of Nottingham
n...@cs.nott.ac.uk

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-25 Thread Gábor Lehel
2012/2/25 Gábor Lehel illiss...@gmail.com:
 Please correct me if I've misunderstood or mischaracterized any aspect of 
 DORF.

Okay, I did end up misunderstanding and mischaracterizing at least two
aspects of DORF.

Re-reading the wiki page:

http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields

it's clear that you would not have to write fieldLabel declarations
for every single field of every single record, only for the ones you
wish to be shared and usable polymorphically. By default, fields of
individual records would be specific to that record (monomorphic in
the type of the record), except if there is a fieldLabel declaration
for them in scope in which case they would be considered instances of
it. (I hope I have it right this time...)

So the difference between DORF and my variant would be:

DORF: Fields are record-specific (monomorphic in the record type) by
default; having a field be polymorphic requires writing a fieldLabel
declaration and having it in scope when the record is declared; if a
matching fieldLabel is in scope the field is automatically considered
shared and polymorphic in the record type. In other words, you have to
write the classes explicitly, but the instances are inferred
automatically.

Me: Declaring a record always implies fieldLabel declarations for each
of its fields (record-specific, monomorphic-in-the-record-type fields
are not possible); these are always *new* fieldLabels, which are not
considered to be the same as previous ones and cannot be used
interchangeably with them; to re-use an existing fieldLabel for a
field of your record you must use explicit syntax. In other words,
here the classes are automatic, but the instances are explicit.

It wasn't clear to me before that DORF retains record-monomorphic
fields, while my variant does away with them. In DORF you can
presumably still use a record-monomorphic field selector to help infer
the concrete type of the record (whereas with polymorphic fields
inference goes in the other direction). Also, while in both variants
it is possible to avoid re-using an existing field class for your
record, in my variant it's not possible to prevent a downstream record
from re-using your field class (whereas record-monomorphic fields by
definition can't have further instances). So in effect in DORF inside
of record declarations you can have two types of fields,
record-polymorphic and record-monomorphic, along with separate
top-level fieldLabel declarations to declare which ones are the
polymorphic fields; while in my variant inside of records you can have
two types of fields, classes and instances, with explicit syntax
to indicate which ones are the instances. Retaining record-monomorphic
fields seems like a flexibility-versus-consistency tradeoff: in DORF
you have two types of fields with opposite behaviour with respect to
type inference, whereas with my variant you only have one.

One troubling consequence of DORF -- again, if I'm understanding
things correctly -- is that due to implicit field instances a module
import can change the meaning of your program: a record field which
was considered record-monomorphic for lack of a matching fieldLabel
declaration will be considered polymorphic is one is imported. My
variant avoids this.

The other aspect of DORF which I mischaracterized in my previous email
is that fieldLabel declarations don't look like

fieldLabel name :: Text

rather, they look like

fieldLabel name :: r -> Text

where r stands for the type of the record. The implications of this
are not clear to me. As Henrik's email helped me realize, I'm
completely clueless with regards to how type variables are scoped and
handled in DORF. I also don't know how my proposed modifications would
affect it. So I'll go back to reading the wiki some more and let
Anthony field Henrik's questions in the meantime, if he wants to. (One
thing that's obvious is that universally quantified polymorphic fields
*are* allowed in DORF, because a specific example is listed which uses
one. It's completely inconceivable to me that any record system
proposal could be adopted which required doing away with them.
Complete show-stopper.)

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-25 Thread Barney Hilken
After more pondering, I finally think I understand what the DORFistas want. 
Here is an example:

You want to define records which describe people, and include (among other 
things) a field called name. There might be several different record types 
with a name field, depending on whether the record refers to a customer, an 
employee, a business contact etc., but in each case name is the name of the 
person to which the record refers. You then write various functions which 
assume this, such as

   spam :: Has r name String => r -> String
   spam r = "Dear " ++ r.name ++ "\nHave you heard..."

Now I want to define records which describe products, and I also use a field 
name in the same way, except that it is the brand name of the product. I also 
define functions such as

   offer :: Has r name String => r -> String
   offer r = "Reduced! " ++ r.name ++ " 50% off!"

It doesn't make any sense to apply your functions to my records or vice-versa, 
but because we both chose the same label, the compiler allows it. Putting the 
code in separate modules makes no difference, since labels are global.


Here is a simple solution, using SORF:

The real problem is that the polymorphism of spam and offer is too general. We 
should each define new classes

   class Has r name String => HasPersonalName r
   class Has r name String => HasBrandName r

and make  each of our record types an instance of this class

   instance HasPersonalName EmployeeRecord
   instance HasPersonalName CustomerRecord
   instance HasBrandName FoodRecord

then we can define functions with a more specific polymorphism

   spam :: HasPersonalName r => r -> String
   spam r = "Dear " ++ r.name ++ "\nHave you heard..."

   offer :: HasBrandName r => r -> String
   offer r = "Reduced! " ++ r.name ++ " 50% off!"

Now there is no danger of confusing the two uses of name, because my records 
are not instances of HasPersonalName, they are instances of HasBrandName. You 
only use the class Has if you really want things to be polymorphic over all 
records, otherwise you use the more specific class.
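
In fact this can be tried today by encoding Has as an ordinary multi-parameter class with a proxy type for the label. The encoding below (the Name proxy, the record types and their fields) is my own invention rather than the proposed built-in Has, but it shows the intended behaviour: spam accepts an EmployeeRecord and is rejected on a FoodRecord.

    {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, FlexibleContexts #-}

    data Name = Name                       -- proxy standing in for the label
    class Has r l t where
      get :: r -> l -> t

    class Has r Name String => HasPersonalName r
    class Has r Name String => HasBrandName r

    data EmployeeRecord = EmployeeRecord { empName   :: String }
    data FoodRecord     = FoodRecord     { brandName :: String }

    instance Has EmployeeRecord Name String where get r _ = empName r
    instance Has FoodRecord     Name String where get r _ = brandName r

    instance HasPersonalName EmployeeRecord
    instance HasBrandName    FoodRecord

    spam :: HasPersonalName r => r -> String
    spam r = "Dear " ++ get r Name ++ "\nHave you heard..."

    -- spam (EmployeeRecord "Chris")  is accepted;
    -- spam (FoodRecord "Acme Oats")  is rejected: no HasPersonalName FoodRecord.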


This seems to me a much simpler approach than building the mechanism in to the 
language as DORF does, and it's also more general, because it isn't hard linked 
to the module system. Does it have any disadvantages?

Barney.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-25 Thread Gábor Lehel
On Sat, Feb 25, 2012 at 3:54 PM, Barney Hilken b.hil...@ntlworld.com wrote:
 After more pondering, I finally think I understand what the DORFistas want. 
 Here is an example:

 You want to define records which describe people, and include (among other 
 things) a field called name. There might be several different record types 
 with a name field, depending on whether the record refers to a customer, an 
 employee, a business contact etc., but in each case name is the name of the 
 person to which the record refers. You then write various functions which 
 assume this, such as

        spam :: Has r name String => r -> String
        spam r = "Dear " ++ r.name ++ "\nHave you heard...

 Now I want to define records which describe products, and I also use a field 
 name in the same way, except that it is the brand name of the product. I 
 also define functions such as

        offer :: Has r name String => r -> String
        offer r = "Reduced! " ++ r.name ++ " 50% off!"

 It doesn't make any sense to apply your functions to my records or 
 vice-versa, but because we both chose the same label, the compiler allows it. 
 Putting the code in separate modules makes no difference, since labels are 
 global.

Exactly!



 Here is a simple solution, using SORF:

 The real problem is that the polymorphism of spam and offer is too general. 
 We should each define new classes

        class Has r name String => HasPersonalName r
        class Has r name String => HasBrandName r

 and make  each of our record types an instance of this class

       instance HasPersonalName EmployeeRecord
       instance HasPersonalName CustomerRecord
       instance HasBrandName FoodRecord

 then we can define functions with a more specific polymorphism

        spam :: HasPersonalName r => r -> String
        spam r = "Dear " ++ r.name ++ "\nHave you heard...

        offer :: HasBrandName r => r -> String
        offer r = "Reduced! " ++ r.name ++ " 50% off!"

 Now there is no danger of confusing the two uses of name, because my 
 records are not instances of HasPersonalName, they are instances of 
 HasBrandName. You only use the class Has if you really want things to be 
 polymorphic over all records, otherwise you use the more specific class.


 This seems to me a much simpler approach than building the mechanism in to 
 the language as DORF does, and it's also more general, because it isn't hard 
 linked to the module system. Does it have any disadvantages?

I can't tell offhand whether it has any drawbacks with respect to
expressiveness. It seems to be a good solution to the stated problem,
so thank you for proposing it.

My objection is that I'm not sure if there is ever a case where you
really want things to be polymorphic over all records. There is
nothing (as far as I know) analogous to this kind of implicit
name-based polymorphism anywhere in Haskell. It doesn't seem like the
Haskell way to have the less safe thing as the one that's default and
convenient, and to allow the programmer to layer a more-safe thing on
top of it if he or she wants to. It seems more like the Haskell way to
have the safer thing be the default and to require extra work if you
want to do something less safe*. In this specific case, is there any
actual use case for global implicitly-polymorphic-by-name record
fields, where that is actually what you want, and where the DORFish
way which is analogous to classes-and-instances wouldn't be
appropriate?

* (Now granted, if pure code versus unsafePerformIO is white versus
black, then this is shade-of-gray versus
slightly-darker-shade-of-gray, but the principle is the same.)

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-25 Thread Isaac Dupree

On 02/25/2012 10:18 AM, Gábor Lehel wrote:

This seems to me a much simpler approach than building the mechanism in to the 
language as DORF does, and it's also more general, because it isn't hard linked 
to the module system. Does it have any disadvantages?


I can't tell offhand whether it has any drawbacks with respect to
expressiveness. It seems to be a good solution to the stated problem,
so thank you for proposing it.

My objection is that I'm not sure if there is ever a case where you
really want things to be polymorphic over all records. There is
nothing (as far as I know) analogous to this kind of implicit
name-based polymorphism anywhere in Haskell. [...]


True enough.  But DORF doesn't, IMHO, really solve this concern.  If you 
choose to use DORF, then your PersonalName and BrandNames will still be 
overloaded in just the way you don't want.  The only way to avoid this 
is a pretty arbitrary stylistic decision whether to use Haskell98-style 
field-name-prefixes or use new-style overloading.


Even SORF is better than, say, C++ overloading in the sense that adding 
another overload in SORF cannot cause code not to compile, nor change 
its behaviour.


Convince me otherwise.

-Isaac

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-25 Thread Gábor Lehel
On Sat, Feb 25, 2012 at 10:09 PM, Isaac Dupree
m...@isaac.cedarswampstudios.org wrote:
 On 02/25/2012 10:18 AM, Gábor Lehel wrote:

 This seems to me a much simpler approach than building the mechanism in
 to the language as DORF does, and it's also more general, because it isn't
 hard linked to the module system. Does it have any disadvantages?


 I can't tell offhand whether it has any drawbacks with respect to
 expressiveness. It seems to be a good solution to the stated problem,
 so thank you for proposing it.

 My objection is that I'm not sure if there is ever a case where you
 really want things to be polymorphic over all records. There is
 nothing (as far as I know) analogous to this kind of implicit
 name-based polymorphism anywhere in Haskell. [...]


 True enough.  But DORF doesn't, IMHO, really solve this concern.  If you
 choose to use DORF, then your PersonalName and BrandNames will still be
 overloaded in just the way you don't want.  The only way to avoid this is a
 pretty arbitrary stylistic decision whether to use Haskell98-style
 field-name-prefixes or use new-style overloading.

Could you elaborate on this? (What's the way I don't want? What do you
mean by field-name-prefixes versus new-style overloading?) With DORF I
have control over which fields are polymorphic over which records,
very much like how I have control over which classes are polymorphic
over which types. That's what I want.


 Even SORF is better than, say, C++ overloading in the sense that adding
 another overload in SORF cannot cause code not to compile, nor change its
 behaviour.

Sure.


 Convince me otherwise.


Your position seems to be that unless there is some kind of grave
blocking problem with SORF, then we should go with SORF. I don't
really understand this. I think we should go with the best solution
available. I think DORF is a better solution than SORF, so we should
rather go with DORF than SORF. You've just admitted that there is no
actual use case for the behaviour of SORF, as opposed to that of DORF.
What am I missing?

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-25 Thread Barney Hilken
 My objection is that I'm not sure if there is ever a case where you
 really want things to be polymorphic over all records.

Well, I don't have a simple, really convincing example, but there are certainly 
things I want to play with.
More importantly, DORF automatically attaches one class to each label, but this 
is often not what you want. For example, if you have two fields firstname and 
lastname the associated classes are less useful: what you really want is 

  class (Has r firstname String, Has r lastname String) => HasPersonalName r

so that you can define

   fullname :: HasPersonalName r => r -> String
   fullname r = r.firstname ++ " " ++ r.lastname

You may also want to define subclasses to express more specific conditions. In 
general, the compiler cannot automatically deduce what is semantically 
important: you need to define it yourself. The Has class is the base on which 
you can build.

 It doesn't seem like the
 Haskell way to have the less safe thing as the one that's default and
 convenient, and to allow the programmer to layer a more-safe thing on
 top of it if he or she wants to. It seems more like the Haskell way to
 have the safer thing be the default and to require extra work if you
 want to do something less safe*.

I think you are using the word safe in a slightly misleading way. None of 
this is mathematically unsafe, because projections are natural (truly 
polymorphic). The safety that is broken here is nothing to do with the 
semantics of the language, it is to do with the semantics of the system being 
implemented, and that is something the compiler cannot infer. As my example 
above shows, it doesn't always correspond one to one with the labels.
 
The Haskell way is to make things as polymorphic as is mathematically safe, even
when this goes beyond the programmer's original intention. You can then restrict
this polymorphism by giving explicit, less general types in the same way as in
my examples. I think my approach is more Haskell-like.

Another important Haskell design consideration is to reuse parts of the language
where possible, rather than introduce new structures. Type classes were 
originally introduced to deal with equality and numeric functions, but were 
reused for many things including monads. My approach achieves the same as DORF 
(and more), but using existing language features instead of introducing new 
ones.

Barney.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-25 Thread Isaac Dupree

On 02/25/2012 05:10 PM, Gábor Lehel wrote:

Could you elaborate on this? (What's the way I don't want? What do you
mean by field-name-prefixes versus new-style overloading?) With DORF I
have control over which fields are polymorphic over which records,
very much like how I have control over which classes are polymorphic
over which types. That's what I want.


Darn, I misinterpreted DORF.  There was too much text and too many options.

Tell me if I'm correct:
A. Every declaration with record syntax creates Has instances for all 
fields [1].
B. Has, get and set may not be written by users (guessing due to 
representation-hiding fail).

C. You create functions using fieldLabel name [...]
D. which have the magical effect of, when in scope unqualified, causing 
data types defined with record syntax to be accessible through that 
particular fieldLabel function (and no other way).
E. (When two fieldLabels of the same name are in scope unqualified, 
declaring a record containing that name is an error.)
F. So adding an import (for some other reason for your code) that 
happens to include a fieldLabel can make your records accidentally be 
more visible, rather than be compile-error or no-effect.


I feel weird about record fields having an option that depends on 
whether something's in scope and cannot be controlled syntactically. 
Maybe we can fix that without making the syntax worse.


G. It is possible (but rather ugly) to use dot-notation when there are 
multiple fieldNames of the same name in scope. [2]


Hmm.  Maybe this is Haskelly as well as convenient enough.  Did I get 
everything right?  What do you think about my concern about F?


[1] 
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/ImplementorsView
[2] 
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/DotPostfix#UsingDotnotationamongstqualifiednames


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-25 Thread wren ng thornton

On 2/24/12 5:40 PM, Johan Tibell wrote:

I share Greg's concerns about polymorphic projections. For example,
given a function

 sort :: Ord a => ...

we don't allow any 'a' that happens to export an operator that's
spelled <= to be passed to 'sort'. We have the user explicitly create
an instance and thereby defining that their <= is e.g. a strict weak
ordering and thus make sense when used with 'sort'. This explicitness
is useful, it communicates the contract of the function to the reader
and lets us catch mistakes in a way that automatically polymorphic
projections don't.

Automatically polymorphic projections feels like Go's structural
polymorphism, C++'s templates or C's automatic numeric coercions, and
I'm worried it'll lead to problems when used at scale. They're not
required to solve the problem we're trying to solve, so lets hurry
slowly and don't bake them in together with the namespacing problem.
At the very least use two different LANGUAGE pragmas so users can have
one without the other.


+1.

I'm not sure that I like the current proposals for how to control the 
non/automatic-ness of polymorphism (for reasons I can spell out later, 
if desired). But we definitely want to have something that's a bit more 
cultured than simply making all record projectors polymorphic over records.


--
Live well,
~wren

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-25 Thread wren ng thornton

On 2/25/12 10:18 AM, Gábor Lehel wrote:

On Sat, Feb 25, 2012 at 3:54 PM, Barney Hilkenb.hil...@ntlworld.com  wrote:

After more pondering, I finally think I understand what the DORFistas want. 
Here is an example:

You want to define records which describe people, and include (among other things) a field called 
name. There might be several different record types with a name field, depending on 
whether the record refers to a customer, an employee, a business contact etc., but in each case 
name is the name of the person to which the record refers. You then write various 
functions which assume this, such as


   spam :: Has r name String => r -> String
   spam r = "Dear " ++ r.name ++ "\nHave you heard...


Now I want to define records which describe products, and I also use a field 
name in the same way, except that it is the brand name of the product. I also 
define functions such as


   offer :: Has r name String => r -> String
   offer r = "Reduced! " ++ r.name ++ " 50% off!"


It doesn't make any sense to apply your functions to my records or vice-versa, 
but because we both chose the same label, the compiler allows it. Putting the 
code in separate modules makes no difference, since labels are global.


Exactly!


FWIW, this is the concern I alluded to earlier. Namely that we may want 
to have two (or more), er, 'classes' of records--- where a field is 
polymorphic over an individual class, but we don't want those classes to 
merge simply because they happened to choose the same name and type for 
the field.


I'm not sure it's a good proposal, but it seems like the only way to 
handle this issue is to (1) introduce a new kind for 
semantically-oriented field names, and (2) make the Has class use that 
kind rather than a type-level string. By (1), what I mean is that rather 
than referring to the field as name, we would declare PersonalName and 
BrandName and then use those in lieu of the string. And if we do that, 
then (2) demands that we must somehow make explicit which one we mean, 
should we want the `name` field to be polymorphic for some given record 
declaration.
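
For what it's worth, (1) and (2) can be sketched with today's DataKinds; the class shape, the Proxy passing and the record types below are all my own guesses at what this might look like, not the actual SORF machinery:

    {-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
                 FlexibleInstances, FlexibleContexts #-}
    import Data.Proxy (Proxy (..))

    -- Field names get their own kind instead of being type-level strings.
    data FieldName = PersonalName | BrandName

    class Has r (f :: FieldName) t where
      get :: Proxy f -> r -> t

    data Employee = Employee { employeeName :: String }
    data Food     = Food     { foodBrand    :: String }

    instance Has Employee 'PersonalName String where get _ = employeeName
    instance Has Food     'BrandName    String where get _ = foodBrand

    -- name fields only unify when they were declared to mean the same thing:
    spam :: Has r 'PersonalName String => r -> String
    spam r = "Dear " ++ get (Proxy :: Proxy 'PersonalName) r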


--
Live well,
~wren

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-25 Thread Anthony Clayden
Whoa! suddenly a deluge over the DORF proposal.

I don't have time to reply fully now, but I must say: Barney
you have got it all wrong.

No, DORF does not attach one class to each label. There is
only one class 'Has', with methods get and set. Each record
decl generates an instance for the combination of
record/field. You can't mix declared and free-standing
labels in the same record. The switch for DORF is at the
module level: in a module either all records and labels use
DORF, or none do (that is, they use H98 style with each
field name being unique).
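
For concreteness, the shape being described is roughly the following, written with ordinary class and instance declarations (the Name proxy type and the Customer record are invented here, and the real DORF desugaring differs in detail, e.g. around type-changing update):

    {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, FunctionalDependencies #-}

    data Name = Name                         -- one proxy per declared label

    -- the single Has class, with methods get and set
    class Has r l t | r l -> t where
      get :: r -> l -> t
      set :: l -> t -> r -> r

    -- a record declaration such as
    data Customer = Customer { custName :: String }
    -- generates one instance per record/field combination:
    instance Has Customer Name String where
      get r _   = custName r
      set _ v r = r { custName = v }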

AntC

- Original Message Follows -
  My objection is that I'm not sure if there is ever a
  case where you really want things to be polymorphic
 over all records.
 
 Well, I don't have a simple, really convincing example,
 but there are certainly things I want to play with. More
 importantly, DORF automatically attaches one class to each
 label, but this is often not what you want. For example,
 if you have two fields firstname and lastname the
 associated classes are less useful: what you really want
 is 
 
   class (Has r firstname String, Has r lastname String) => HasPersonalName r

 so that you can define

 fullname :: HasPersonalName r => r -> String
 fullname r = r.firstname ++ " " ++ r.lastname
 
 You may also want to define subclasses to express more
 specific conditions. In general, the compiler cannot
 automatically deduce what is semantically important: you
 need to define it yourself. The Has class is the base on
 which you can build.
 
  It doesn't seem like the
  Haskell way to have the less safe thing as the one
  that's default and convenient, and to allow the
  programmer to layer a more-safe thing on top of it if he
  or she wants to. It seems more like the Haskell way to
 have the safer thing be the default and to require extra
  work if you want to do something less safe*.
 
 I think you are using the word safe in a slightly
 misleading way. None of this is mathematically unsafe,
 because projections are natural (truly polymorphic). The
 safety that is broken here is nothing to do with the
 semantics of the language, it is to do with the semantics
 of the system being implemented, and that is something the
 compiler cannot infer. As my example above shows, it
 doesn't always correspond one to one with the labels.
  
 The Haskel way is to make things as polymorphic as is
 mathematically safe, even when this goes beyond the
 programmers original intention. You can then restrict this
 polymorphism by giving explicit less general types in the
 same way as in my examples. I think my approach is more
 Haskel like.
 
 Another important Haskel design consideration is to reuse
 parts of the language where possible, rather than
 introduce new structures. Type classes were originally
 introduced to deal with equality and numeric functions,
 but were reused for many things including monads. My
 approach achieves the same as DORF (and more), but using
 existing language features instead of introducing new
 ones.
 
 Barney.
 
 
 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org

http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-25 Thread Anthony Clayden

Wren/all

Please remember SPJ's request on the Records wiki to stick
to the namespace issue. We're trying to make something
better that H98's name clash. We are not trying to build
some ideal polymorphic record system.

To take the field labelled name: in H98 you have to
declare each record in a different module and import every
module into your application and always refer to name
prefixed by the module.

DORF doesn't stop you doing any of that. So if you think of
each name being a different meaning, carry on using
multiple modules and module prefixes. That's as easy (or
difficult) as under H98.

You can declare fieldLabel name in one module, import it
unqualified into another and declare more records with a
name label -- contrary to what somebody was claiming.

Or you can import fieldLabel name qualified, and use it as
a selector function on all record types declared using it.
It's just a function like any other imported/qualified
function, for crying out loud!

So if there's 'your' name label and 'my' name, then use
the module/qualification system as you would for any other
scoped name. Then trying to apply My.name to Your.record
will get an instance failure, as usual.

(And by the way, there's no DORFistas, let's avoid
personalising this. There are people who don't seem to
understand DORF -- both those criticising and those
supporting.)

AntC

- Original Message Follows -
 On 2/25/12 10:18 AM, Gábor Lehel wrote:
  On Sat, Feb 25, 2012 at 3:54 PM, Barney
 Hilkenb.hil...@ntlworld.com  wrote:  After more
 pondering, I finally think I understand what the DORFistas
 want. Here is an example: 
  You want to define records which describe people, and
 include (among other things) a field called name. There
 might be several different record types with a name field,
 depending on whether the record refers to a customer, an
 employee, a business contact etc., but in each case name
 is the name of the person to which the record refers. You
 then write various functions which assume this, such as 
 spam :: Has r name String =  r -  String
 spam r = Dear  ++ r.name ++ \nHave you
 heard... 
  Now I want to define records which describe products,
 and I also use a field name in the same way, except that
 it is the brand name of the product. I also define
 functions such as 
 offer :: Has r name String =  r -  String
 offer r = Reduced!  ++ r.name ++  50%
off!
 
  It doesn't make any sense to apply your functions to my
 records or vice-versa, but because we both chose the same
 label, the compiler allows it. Putting the code in
 separate modules makes no difference, since labels are
 global. 
  Exactly!
 
 FWIW, this is the concern I alluded to earlier. Namely
 that we may want  to have two (or more), er, 'classes' of
 records--- where a field is  polymorphic over an
 individual class, but we don't want those classes to 
 merge simply because they happened to choose the same name
 and type for  the field.
 
 I'm not sure it's a good proposal, but it seems like the
 only way to  handle this issue is to (1) introduce a new
 kind for  semantically-oriented field names, and (2) make
 the Has class use that  kind rather than a type-level
 string. By (1), what I mean is that rather  than referring
 to the field as name, we would declare PersonalName and 
 BrandName and then use those in lieu of the string. And if
 we do that,  then (2) demands that we must somehow make
 explicit which one we mean,  should we want the `name`
 field to be polymorphic for some given record 
 declaration.
 
 -- 
 Live well,
 ~wren
 
 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org

http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-24 Thread Greg Weber
I looked at the DORF implementers view
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/ImplementorsView

It appears that we still have no solution for record updates that
change the type of a polymorphic field.
I think this means we need to look to ways to solve this update issue
other than just de-sugaring to typeclasses.

On Thu, Feb 23, 2012 at 5:01 PM, Greg Weber g...@gregweber.info wrote:
 On Thu, Feb 23, 2012 at 4:25 PM, AntC anthony_clay...@clear.net.nz wrote:
 Greg Weber greg at gregweber.info writes:


 Thanks to Anthony for his DORF proposal, and spending time to clearly
 explain it on the wiki.

 I have a big issue with the approach as presented - it assumes that
 record fields with the same name should automatically be shared. As I
 have stated before, to avoid circular dependencies, one is forced to
 declare types in a single module in Haskell that would ideally be
 declared in a multiple modules. ...

 Thanks Greg, but I'm struggling to understand what the difficulty is with
 sharing the same name, or why your dependencies are circular. Would you be
 able to post some code snippets that illustrate what's going on (or what's
 failing to go on)?

 Or perhaps this is an experience from some application where you weren't 
 using
 Haskell? Could you at least describe what was in each record type?

 You can get an idea of things in the section 'Problems with using the
 module namespace mechanism' here:
 http://hackage.haskell.org/trac/ghc/wiki/Records?version=4
 The attachment that Chris Done left to demonstrate his types seems to
 be overwritten.
 I will bring back his text as it seems his point does need to be driven home.
 A lot of Haskell projects have a separate Types module to avoid issues
 with circular dependencies.


 Continuing the database example, I
 will have multiple tables with a 'name' column, but they do not have
 the same meaning.

 If I have a function:

   helloName person = "Hello, " ++ person.name

 The compiler could infer that I want to say hello to inanimate objects!

 So the first question is:
 * do your fields labelled `name` all have the same type? (Perhaps all 
 String?)
 * what meaning does a name have beyond being a String?

 Your code snippet doesn't give the types, but if I guess that you intend
 `person` to be of type `Person`. Then you can give a signature:
     helloName :: Person -> String

 If person can be 'anything' then the type inferred from the bare function
 equation would be:
     helloName :: r{ name :: String } => r -> String

 So you could say hello to your cat, and your pet rock. You couldn't say hello
 to a pile of bricks (unless it's been given a name as an art installation in
 the Tate Gallery ;-)

 Of course we know that we can always add type annotations to clarify
 things. The question is whether we want to be opt-out and have to
 explain people that they can end up with weakly typed code when they
 don't want to share fields.




 Note that I am not completely against abstraction over fields, I just
 don't think it is the best default behavior.


 So what is the best default behaviour, and what is the range of other
 behaviours you want to support?

 I believe the best default is to not share fields, but instead have
 the programmer indicate at or outside of the record definition that
 they want to share fields. Basically just use type-classes how they
 are used now - as opt-in. But I am OK with making an especially easy
 way to do this with records if the current techniques for defining
 typeclasses are seen as too verbose.



 And the section Modules and qualified names for records shows that
 the proposal doesn't fully solve the name-spacing issue.


 I think it shows only that record field labels can run into accidental name
 clash in exactly the same way as everything else in Haskell (or indeed in any
 programming language). And that Haskell has a perfectly good way for dealing
 with that; and that DORF fits in with it.

 Greg, please give some specific examples! I'm trying to understand, but I'm
 only guessing from the fragments of info you're giving.

 AntC




 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-24 Thread Greg Weber
Actually, I looked at the SORF page again:
http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields
There is now an 'Alternative Proposal' section that claims to solve
polymorphic update.
If anyone has comments on this please let us know!

So my proposal would be to somehow use the SORF Alternative Proposal
to allow for name-spaced records. This should be used to generate
internal constraints and not be exposed to the end user and not
automatically abstract over fields. This leaves all of our future
options open while satisfying the narrow issue at hand.

On Fri, Feb 24, 2012 at 9:27 AM, Greg Weber g...@gregweber.info wrote:
 I looked at the DORF implementers view
 http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/ImplementorsView

 It appears that we still have no solution for record updates that
 change the type of a polymorphic field.
 I think this means we need to look to ways to solve this update issue
 other than just de-sugaring to typeclasses.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-24 Thread Anthony Clayden
 Actually, I looked at the SORF page again:

http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields
 There is now an 'Alternative Proposal' section that claims
 to solve polymorphic update.
 If anyone has comments on this please let us know!

Yes, Greg the quasifunctor stuff has been there for some
time.

You seem to be confused between polymorphic fields vs
type-changing update. Quasifunctor is talking about
type-changing update. (Look at the example given for `data R
a`: there's no polymorphic field.)

DORF already has a working approach to type-changing update.

I make reference to quasifunctor in the DORF proposal.
Essentially, if quasifunctor can be made to work for SORF,
it can also be made to work for DORF. (Because what
underlies both is to use a Has class with methods get/set.)
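
To give the flavour of what a get/set class with type-changing update might look like (this is my own illustration, not the quasifunctor machinery or the DORF desugaring; all names are invented):

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

    -- r is the record before update, r' after; a and a' the old and new field types.
    class HasX r r' a a' | r -> a, r' -> a', r a' -> r', r' a -> r where
      getX :: r -> a
      setX :: a' -> r -> r'

    data Pair a b = Pair { pairFst :: a, pairSnd :: b } deriving Show

    instance HasX (Pair a b) (Pair a' b) a a' where
      getX = pairFst
      setX x (Pair _ y) = Pair x y

    -- setX True (Pair "hello" ()) :: Pair Bool ()   -- the field's type changes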

No, I don't think anybody has a satisfactory approach to
updating polymorphic/higher-ranked fields. (DORF mentions
one, but it's a ghastly hack.)

In addition to higher-ranked types, SPJ is also concerned
about h-r's with constraints. I've added a very speculative
piece in DORF's comparison to SORF that considers using
Constraint Kinds. (It's probably piling ghastliness upon
ghastliness.)

 ... and not automatically
 abstract over fields. This leaves all of our future
 options open while satisfying the narrow issue at hand.
 

What on earth do you mean by not automatically abstract
over fields?

AntC


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-24 Thread Barney Hilken


 This should be used to generate
 internal constraints and not be exposed to the end user and not
 automatically abstract over fields.

Every one of your messages about records stresses your dislike for polymorphic 
projections, and your insistence that the Has class should be hidden from the 
user. I've read all of your explanations, but I'm still totally unconvinced. 
All your arguments about the semantics of labels are based on the way you want 
to use them, not on what they are. They are projection functions! Semantically, 
the only difference between them is the types. Polymorphism makes perfect sense 
and is completely natural. There is nothing untyped about it.

I feel you are pushing a narrow personal agenda here. I think the Has class 
would be useful to programmers and no harder to understand than other Haskell
classes. It should not be hidden.

Barney.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-24 Thread Greg Weber
On Fri, Feb 24, 2012 at 2:00 PM, Barney Hilken b.hil...@ntlworld.com wrote:


 This should be used to generate
 internal constraints and not be exposed to the end user and not
 automatically abstract over fields.

 Every one of your messages about records stresses your dislike for 
 polymorphic projections, and your insistence that the Has class should be 
 hidden from the user. I've read all of your explanations, but I'm still 
 totally unconvinced. All your arguments about the semantics of labels are 
 based on the way you want to use them, not on what they are. They are 
 projection functions! Semantically, the only difference between them is the 
 types. Polymorphism makes perfect sense and is completely natural. There is 
 nothing untyped about it.

 I feel you are pushing a narrow personal agenda here. I think the Has class 
 would be useful to programmers and no harder to understand than other Haskell
 classes. It should not be hidden.

Barney, these kinds of statements about personal agenda are
counterproductive. I have put in a lot of work that nobody, including
yourself has been willing to do for years to push Haskell's records
forward. I would appreciate respectful disagreement - I think I have
earned that much. Please just stick to logical arguments.

Greg Weber

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-24 Thread Greg Weber

 No, I don't think anybody has a satisfactory approach to
 updating polymorphic/higher-ranked fields. (DORF mentions
 one, but it's a ghastly hack.

So are the proposals dead until this is tackled, or should SORF/DORF
propose not to allow that?


 What on earth do you mean by not automatically abstract
 over fields?

Abstraction over fields is the ability to write a function that works
on two records with the same field label.

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-24 Thread Barney Hilken
I'm sorry Greg, I didn't mean to be offensive. I just feel that all your 
arguments in favour of this restriction are based on one particular application 
of records, rather than a general notion of what records are. Obviously this 
application is what motivates you to do all the valuable work you have done, 
and I appreciate that. But people are going to use records in many different 
ways, and I don't think that a restriction which makes perfect sense in your 
application should be allowed to restrict the ways other people want to write 
programs.

Barney.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-24 Thread Johan Tibell
Hi Barney,

On Fri, Feb 24, 2012 at 2:00 PM, Barney Hilken b.hil...@ntlworld.com wrote:
 Every one of your messages about records stresses your dislike for 
 polymorphic projections, and your insistence that the Has class should be 
 hidden from the user. I've read all of your explanations, but I'm still 
 totally unconvinced. All your arguments about the semantics of labels are 
 based on the way you want to use them, not on what they are. They are 
 projection functions! Semantically, the only difference between them is the 
 types. Polymorphism makes perfect sense and is completely natural. There is 
 nothing untyped about it.

I share Greg's concerns about polymorphic projections. For example,
given a function

sort :: Ord a => ...

we don't allow any 'a' that happens to export an operator that's
spelled <= to be passed to 'sort'. We have the user explicitly create
an instance and thereby defining that their <= is e.g. a strict weak
ordering and thus make sense when used with 'sort'. This explicitness
is useful, it communicates the contract of the function to the reader
and lets us catch mistakes in a way that automatically polymorphic
projections don't.

Automatically polymorphic projections feel like Go's structural
polymorphism, C++'s templates or C's automatic numeric coercions, and
I'm worried it'll lead to problems when used at scale. They're not
required to solve the problem we're trying to solve, so lets hurry
slowly and don't bake them in together with the namespacing problem.
At the very least use two different LANGUAGE pragmas so users can have
one without the other.

Cheers,
  Johan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-24 Thread Barney Hilken
 I share Greg's concerns about polymorphic projections. For example,
 given a function
 
sort :: Ord a = ...
 
 we don't allow any 'a' that happens to export a operator that's
 spelled = to be passed to 'sort'. We have the user explicitly create
 an instance and thereby defining that their = is e.g. a strict weak
 ordering and thus make sense when used with 'sort'. This explicitness
 is useful, it communicates the contract of the function to the reader
 and lets us catch mistakes in a way that automatically polymorphic
 projections don't.

But the difference is that <= is (potentially) an arbitrary function, so we
need to be careful that the instances make sense. A formally correct system 
would insist that all instances are (at least) reflexive and transitive. But in 
the case of records, we know what the instances are: they are projection 
functions. Every single (automatically generated) instance does exactly the 
same thing: it projects out one component of a record. This isn't like OO 
polymorphism, where messages are actually arbitrary functions which could do 
anything, the polymorphism is exactly the same as that of fst and snd.

 At the very least use two different LANGUAGE pragmas so users can have
 one without the other.

This I can agree with. It was the way that Greg mentioned it in every single 
email which was starting to worry me.

Barney.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-24 Thread Greg Weber
Sorry for getting offended!

I agree with your sentiment, which is why I thought having to write an
extra word in a Record declaration to get automatic abstraction over
fields was a good solution to the problem.

But now I am in the position of being the lone dissenter on automatic
abstraction over fields. And I am in disagreement with SPJ, and I
would gladly drop this entire process and just ask him to be a
benevolent dictator on the subject.

So I am ok with automatic abstraction over fields. Ideally we should
have a way to disable it, just as I was proposing we have a way to
enable it. But perhaps having this will help force a solution to the
circular references issues that makes automatic field abstraction
problematic in the first place. After all, Haskell did start with
barely any way to do IO.

On Fri, Feb 24, 2012 at 2:26 PM, Barney Hilken b.hil...@ntlworld.com wrote:
 I'm sorry Greg, I didn't mean to be offensive. I just feel that all your 
 arguments in favour of this restriction are based on one particular 
 application of records, rather than a general notion of what records are. 
 Obviously this application is what motivates you to do all the valuable work 
 you have done, and I appreciate that. But people are going to use records in 
 many different ways, and I don't think that a restriction which makes perfect 
 sense in your application should be allowed to restrict the ways other people 
 want to write programs.

 Barney.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-24 Thread Johan Tibell
On Fri, Feb 24, 2012 at 3:07 PM, Barney Hilken b.hil...@ntlworld.com wrote:
 But the difference is that = is (potentially) an arbitrary function, so we 
 need to be careful that the instances make sense. A formally correct system 
 would insist that all instances are (at least) reflexive and transitive. But 
 in the case of records, we know what the instances are: they are projection 
 functions. Every single (automatically generated) instance does exactly the 
 same thing: it projects out one component of a record. This isn't like OO 
 polymorphism, where messages are actually arbitrary functions which could 
 do anything, the polymorphism is exactly the same as that of fst and snd.

I appreciate the difference and it might be enough of a difference
(from e.g. OO systems) that the problems seen there won't show up in
Haskell under a new record system.

Aside: It is possible to have no scalar fields in records of course.
data R = C { compare :: (a -> a -> Ordering) }

-- Johan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-24 Thread J. Garrett Morris
On Fri, Feb 24, 2012 at 4:15 PM, Johan Tibell johan.tib...@gmail.com wrote:
 Aside: It is possible to have no scalar fields in records of course.
 data R = C { compare :: (a -> a -> Ordering) }

Has x f has no semantic content besides type x having an f field; Ord
has (at least in the programmer's mind, even if the language can't check
it) meaning beyond the simple presence of a compare function.

 /g

--
Would you be so kind as to remove the apricots from the mashed potatoes?

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-24 Thread Gábor Lehel
On Fri, Feb 24, 2012 at 11:40 PM, Johan Tibell johan.tib...@gmail.com wrote:
 Hi Barney,

 On Fri, Feb 24, 2012 at 2:00 PM, Barney Hilken b.hil...@ntlworld.com wrote:
 Every one of your messages about records stresses your dislike for 
 polymorphic projections, and your insistence that the Has class should be 
 hidden from the user. I've read all of your explanations, but I'm still 
 totally unconvinced. All your arguments about the semantics of labels are 
 based on the way you want to use them, not on what they are. They are 
 projection functions! Semantically, the only difference between them is the 
 types. Polymorphism makes perfect sense and is completely natural. There is 
 nothing untyped about it.

 I share Greg's concerns about polymorphic projections. For example,
 given a function

    sort :: Ord a = ...

 we don't allow any 'a' that happens to export a operator that's
 spelled = to be passed to 'sort'. We have the user explicitly create
 an instance and thereby defining that their = is e.g. a strict weak
 ordering and thus make sense when used with 'sort'. This explicitness
 is useful, it communicates the contract of the function to the reader
 and lets us catch mistakes in a way that automatically polymorphic
 projections don't.

 Automatically polymorphic projections feels like Go's structural
 polymorphism, C++'s templates or C's automatic numeric coercions, and
 I'm worried it'll lead to problems when used at scale. They're not
 required to solve the problem we're trying to solve, so lets hurry
 slowly and don't bake them in together with the namespacing problem.
 At the very least use two different LANGUAGE pragmas so users can have
 one without the other.

I agree completely. This is what I like about DORF: the D stands for
Declared, which is referring to the fact that the contracts are
explicit. Record fields aren't automatically polymorphic based on
their name and type, as with SORF, rather they are scoped and
disambiguated in the same way as classes. My only discontentments are
that it requires top-level declarations for each record field, which
feels excessive, and that the polymorphism is opt-out, in other words
if you declare a record with a given field and a field with that
name/type is already in scope, it is automatically considered to be an
instance of the same field. (Note that this is not the same as SORF,
because if more than one field with the same name/type is in scope you
can (and have to) use the usual explicit module qualification to
disambiguate them, and you can write a new top-level field declaration
to make it explicit that you're not re-using the field which was in
scope.)

I think both of these problems could be solved at the same time if (a)
instead of requiring explicit top-level declarations for fields,
declaring a record would also automatically declare its fields, as
distinct from any other fields which may have been in scope, and (b)
there would be some lightweight syntax you could use within record
declarations to specify that you do want to re-use the record field in
scope instead of declaring a new one. This has the added benefit that
record declarations as currently written would continue to have the
same meaning as they currently have. (For the record, I don't see any
harm in also allowing explicit top-level field declarations, outside
of records; it's the requirement for them which seems onerous.)



So in DORF, if you want to declare a Contact record with a name, a
phone number, and an address, you would write:

fieldLabel name :: Text
fieldLabel phoneNumber :: PhoneNumber
fieldLabel address :: Address

data Contact = Contact { name :: Text, phoneNumber :: PhoneNumber,
address :: Address }
-- it's unclear whether the type annotations would be belong in the
field declarations, in the record, or in both

then if you also want to keep track of people as employees, you write

fieldLabel position :: Position
fieldLabel salary :: Word

data Employee = Employee { name :: Text, position :: Position, salary :: Word }

And the name field would automatically be shared between them, and
could be used polymorphically with either record.

but then if you later write...

data City = City { name :: Text}

that would also automatically re-use the name field, but that would
clearly be wrong. It could be avoided by explicitly declaring a new
name field beforehand. (I suppose this aspect of the complaint might
be overblown, because as you can see when want a new field you always
write a fieldLabel declaration, and if you don't you're implying that
you're intending to use the existing one. But it's still very
verbose.)



In my variant of the proposal, declaring the Contact record would look
like this:

data Contact = Contact { name :: Text, position :: Position, salary :: Int }

This would automatically declare the name, position, and salary fields
with their associated types (equivalently to the fieldLabel
declarations from the previous example).

Then for Employee 

Re: Records in Haskell

2012-02-23 Thread AntC
Greg Weber greg at gregweber.info writes:

 
 Thanks to Anthony for his DORF proposal, and spending time to clearly
 explain it on the wiki.
 
 I have a big issue with the approach as presented - it assumes that
 record fields with the same name should automatically be shared. As I
 have stated before, to avoid circular dependencies, one is forced to
 declare types in a single module in Haskell that would ideally be
 declared in a multiple modules. ...

Thanks Greg, but I'm struggling to understand what the difficulty is with 
sharing the same name, or why your dependencies are circular. Would you be 
able to post some code snippets that illustrate what's going on (or what's 
failing to go on)?

Or perhaps this is an experience from some application where you weren't using 
Haskell? Could you at least describe what was in each record type?

 Continuing the database example, I
 will have multiple tables with a 'name' column, but they do not have
 the same meaning.
 
 If I have a function:
 
   helloName person = "Hello, " ++ person.name
 
 The compiler could infer that I want to say hello to inanimate objects!

So the first question is:
* do your fields labelled `name` all have the same type? (Perhaps all String?)
* what meaning does a name have beyond being a String?

Your code snippet doesn't give the types, but if I guess that you intend 
`person` to be of type `Person`. Then you can give a signature:
 helloName :: Person -> String

If person can be 'anything' then the type inferred from the bare function 
equation would be:
 helloName :: r{ name :: String } => r -> String

So you could say hello to your cat, and your pet rock. You couldn't say hello 
to a pile of bricks (unless it's been given a name as an art installation in 
the Tate Gallery ;-)
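
The r{ name :: String } constraint is of course not implemented yet, but the behaviour can be sketched with a plain class (everything below is invented for illustration, not DORF's generated code):

    class HasName r where
      name :: r -> String

    data Person = Person { personName :: String }
    data Cat    = Cat    { catName    :: String }
    data Brick  = Brick                             -- no name, so no instance

    instance HasName Person where name = personName
    instance HasName Cat    where name = catName

    helloName :: HasName r => r -> String
    helloName person = "Hello, " ++ name person

    -- helloName works on your cat; applying it to Brick is a type error.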



 Note that I am not completely against abstraction over fields, I just
 don't think it is the best default behavior.
 

So what is the best default behaviour, and what is the range of other 
behaviours you want to support?


 And the section Modules and qualified names for records shows that
 the proposal doesn't fully solve the name-spacing issue.
 

I think it shows only that record field labels can run into accidental name 
clash in exactly the same way as everything else in Haskell (or indeed in any 
programming language). And that Haskell has a perfectly good way for dealing 
with that; and that DORF fits in with it.

Greg, please give some specific examples! I'm trying to understand, but I'm 
only guessing from the fragments of info you're giving.

AntC




___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-23 Thread Greg Weber
On Thu, Feb 23, 2012 at 4:25 PM, AntC anthony_clay...@clear.net.nz wrote:
 Greg Weber greg at gregweber.info writes:


 Thanks to Anthony for his DORF proposal, and spending time to clearly
 explain it on the wiki.

 I have a big issue with the approach as presented - it assumes that
 record fields with the same name should automatically be shared. As I
 have stated before, to avoid circular dependencies, one is forced to
 declare types in a single module in Haskell that would ideally be
 declared in a multiple modules. ...

 Thanks Greg, but I'm struggling to understand what the difficulty is with
 sharing the same name, or why your dependencies are circular. Would you be
 able to post some code snippets that illustrate what's going on (or what's
 failing to go on)?

 Or perhaps this is an experience from some application where you weren't using
 Haskell? Could you at least describe what was in each record type?

You can get an idea of things in the section 'Problems with using the
module namespace mechanism' here:
http://hackage.haskell.org/trac/ghc/wiki/Records?version=4
The attachment that Chris Done left to demonstrate his types seems to
be overwritten.
I will bring back his text as it seems his point does need to be driven home.
A lot of Haskell projects have a separate Types module to avoid issues
with circular dependencies.


 Continuing the database example, I
 will have multiple tables with a 'name' column, but they do not have
 the same meaning.

 If I have a function:

   helloName person = "Hello, " ++ person.name

 The compiler could infer that I want to say hello to inanimate objects!

 So the first question is:
 * do your fields labelled `name` all have the same type? (Perhaps all String?)
 * what meaning does a name have beyond being a String?

 Your code snippet doesn't give the types, but I guess that you intend
 `person` to be of type `Person`. Then you can give a signature:
     helloName :: Person -> String

 If person can be 'anything' then the type inferred from the bare function
 equation would be:
     helloName :: r{ name :: String } => r -> String

 So you could say hello to your cat, and your pet rock. You couldn't say hello
 to a pile of bricks (unless it's been given a name as an art installation in
 the Tate Gallery ;-)

Of course we know that we can always add type annotations to clarify
things. The question is whether we want sharing to be opt-out and have to
explain to people that they can end up with weakly typed code when they
don't want to share fields.




 Note that I am not completely against abstraction over fields, I just
 don't think it is the best default behavior.


 So what is the best default behaviour, and what is the range of other
 behaviours you want to support?

I believe the best default is to not share fields, but instead have
the programmer indicate at or outside of the record definition that
they want to share fields. Basically just use type-classes how they
are used now - as opt-in. But I am OK with making an especially easy
way to do this with records if the current techniques for defining
typeclasses are seen as too verbose.



 And the section Modules and qualified names for records shows that
 the proposal doesn't fully solve the name-spacing issue.


 I think it shows only that record field labels can run into accidental name
 clash in exactly the same way as everything else in Haskell (or indeed in any
 programming language). And that Haskell has a perfectly good way for dealing
 with that; and that DORF fits in with it.

 Greg, please give some specific examples! I'm trying to understand, but I'm
 only guessing from the fragments of info you're giving.

 AntC




 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-22 Thread Greg Weber
Thanks to Anthony for his DORF proposal, and spending time to clearly
explain it on the wiki.

I have looked over the main page:
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields

I have a big issue with the approach as presented - it assumes that
record fields with the same name should automatically be shared. As I
have stated before, to avoid circular dependencies, one is forced to
declare types in a single module in Haskell that would ideally be
declared in multiple modules. Continuing the database example, I
will have multiple tables with a 'name' column, but they do not have
the same meaning.

If I have a function:

  helloName person = "Hello, " ++ person.name

The compiler could infer that I want to say hello to inanimate objects!
Note that I am not completely against abstraction over fields, I just
don't think it is the best default behavior.

And the section Modules and qualified names for records shows that
the proposal doesn't fully solve the name-spacing issue.

On Sat, Feb 11, 2012 at 9:43 PM, wren ng thornton w...@freegeek.org wrote:
 On 2/11/12 9:47 AM, Greg Weber wrote:

 What is the use case for changing the type of a record field on update?


 I use it all the time. One may just as well ask: What is the use case for
 changing the type of a Maybe on update? Or what is the use case for changing
 only one of the types for a tuple update? Records are no different from any
 other polymorphic data type. We don't require special syntax for changing
 Maybe A to Maybe B nor for changing (A,B) to (A,C), so why should we treat
 records any differently?

 One particular use case is for records with phantom types. For example, say
 I have a large record which contains both real data and also some
 memoization caches for common queries. I don't want to have to recompute the
 caches every time I make a small change to the data, instead I'd like to
 just be able to flip a type-level bit that says now the caches are dirty.
 This way I can make a bunch of small changes to the main data, and then only
 once that is completed I will recompute the caches (because all the querying
 functions require that the type-level bit says the caches are
 clean/up-to-date).

 There are, of course, non-phantom examples as well. For instance, consider a
 record type for dealing with both sound and unsound versions of some kind of
 yucky data. For example, dealing with commandline argument handling. For the
 first stage of handling we just want to store the raw Strings (or [String]s,
 Maybe Strings,...); but then we'll want to parse those strings into ADTs and
 also perform some sanity checks to make sure the whole configuration is
 sane. One option of doing this is to have our record parametrized by the
 types of the fields, so we're converting from (Yuck String String ...) into
 (Yuck Bool (Maybe (Int,Float)) ...).

 For both of these examples it would be possible to monomorphize things to
 have DataDirty/DataClean or YuckRaw/YuckSane. For the latter example, that's
 probably a better choice; but for the former example it is demonstrably
 inferior. And of course it is easy to construct additional examples where
 monomorphization is not actually feasible.

 The reason to do this sort of thing as polymorphic records is so that you
 can simultaneously have some functions which care about the type parameters,
 and other functions which do not. Without type-changing updates the only way
 to achieve this is with some convoluted hack like defining a type class over
 all the monomorphic records (and duplicating all the nigh-identical record
 definitions), or using data families which are non-portable. Neither of
 those hacks says what you mean, and both require much more sophisticated
 type analysis than just using type-changing update for records.
 Type-changing update should not be removed, and rendering it into something
 distinct from type-unchanging record update is only coherent for phantom
 type uses of type changes and so cannot apply to non-phantom uses.

 --
 Live well,
 ~wren


 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-02-11 Thread wren ng thornton

On 2/11/12 9:47 AM, Greg Weber wrote:

What is the use case for changing the type of a record field on update?


I use it all the time. One may just as well ask: What is the use case 
for changing the type of a Maybe on update? Or what is the use case for 
changing only one of the types for a tuple update? Records are no 
different from any other polymorphic data type. We don't require special 
syntax for changing Maybe A to Maybe B nor for changing (A,B) to (A,C), 
so why should we treat records any differently?


One particular use case is for records with phantom types. For example, 
say I have a large record which contains both real data and also some 
memoization caches for common queries. I don't want to have to recompute 
the caches every time I make a small change to the data, instead I'd 
like to just be able to flip a type-level bit that says now the caches 
are dirty. This way I can make a bunch of small changes to the main 
data, and then only once that is completed I will recompute the caches 
(because all the querying functions require that the type-level bit says 
the caches are clean/up-to-date).

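A minimal, hedged sketch of that pattern (all names here are invented for 
illustration): the type-changing update is accepted because the phantom flag 
appears in no field, so flipping it costs nothing.

  {-# LANGUAGE EmptyDataDecls #-}

  data Clean
  data Dirty

  data Store flag = Store
    { entries  :: [Int]
    , sumCache :: Int       -- only trustworthy when flag is Clean
    }

  -- any edit flips the phantom bit to Dirty, without touching the cache
  insert :: Int -> Store flag -> Store Dirty
  insert x s = s { entries = x : entries s }

  -- recomputing the cache is the only way back to Clean
  recache :: Store flag -> Store Clean
  recache s = s { sumCache = sum (entries s) }

  -- queries demand a Clean store
  total :: Store Clean -> Int
  total = sumCache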

There are, of course, non-phantom examples as well. For instance, 
consider a record type for dealing with both sound and unsound versions 
of some kind of yucky data. For example, dealing with commandline 
argument handling. For the first stage of handling we just want to store 
the raw Strings (or [String]s, Maybe Strings,...); but then we'll want 
to parse those strings into ADTs and also perform some sanity checks to 
make sure the whole configuration is sane. One option of doing this is 
to have our record parametrized by the types of the fields, so we're 
converting from (Yuck String String ...) into (Yuck Bool (Maybe 
(Int,Float)) ...).

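A similarly cut-down, hedged sketch of the commandline example (the field and 
type names are invented), with an ordinary type-changing record update doing 
the raw-to-checked conversion:

  data Config v s = Config
    { verbose :: v
    , size    :: s
    }

  type RawConfig  = Config String String               -- straight off the command line
  type SaneConfig = Config Bool (Maybe (Int, Float))   -- parsed and sanity-checked

  parse :: RawConfig -> Maybe SaneConfig
  parse c = do
    v <- case verbose c of
           "true"  -> Just True
           "false" -> Just False
           _       -> Nothing
    -- both fields are updated, so both type parameters may change
    Just c { verbose = v, size = Nothing }   -- size parsing elided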

For both of these examples it would be possible to monomorphize things 
to have DataDirty/DataClean or YuckRaw/YuckSane. For the latter example, 
that's probably a better choice; but for the former example it is 
demonstrably inferior. And of course it is easy to construct additional 
examples where monomorphization is not actually feasible.


The reason to do this sort of thing as polymorphic records is so that 
you can simultaneously have some functions which care about the type 
parameters, and other functions which do not. Without type-changing 
updates the only way to achieve this is with some convoluted hack like 
defining a type class over all the monomorphic records (and duplicating 
all the nigh-identical record definitions), or using data families 
which are non-portable. Neither of those hacks says what you mean, and 
both require much more sophisticated type analysis than just using 
type-changing update for records. Type-changing update should not be 
removed, and rendering it into something distinct from type-unchanging 
record update is only coherent for phantom type uses of type changes and 
so cannot apply to non-phantom uses.


--
Live well,
~wren

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell - namespacing for fields

2012-01-31 Thread John Lask

On 1/02/2012 12:26 AM, AntC wrote:

Simon Peyton-Jones simonpj at microsoft.com writes:



Beyond that, it would really help namespacing in general to appropriately
extend the module system to allow multiple modules to be declared within a
single file -- or, better yet, submodules. [snip] the added expressive power
should make namespacing issues much more tractable. [snip] this isn't about
implementing records as such -- rather, it's about generally extending the
expressive power of the language so that record systems--among other
things--are easier to write.


I’m agnostic about nested modules.  In principle they would be a good
thing.  However, for the application to records in particular, ...

No! no! no! For records _don't_ put records in nested/sub-modules, and _don't_
require them in separate modules (as currently). Here's how ...




namespace management and record systems are intimately related but, as 
you assert, distinct issues.


taking your example of Customer_id, I expressly want to be able to 
define, in the same syntactic module (file), two records with a field 
with exactly the same name at two different types, say Customer_id :: 
Int and Customer_id :: String. As I understand it, your proposal 
would not enable this.

In reality these two different uses of the name Customer_id are 
unrelated and distinct. Some would argue that therefore they should 
rightly have distinct names; however, that is moot: for the purpose of 
this discussion let's take this as the objective.


There are two roads to travel: Customer_id is one semantic entity (in 
the sense of an overloaded field, disambiguated by the type system) or 
Customer_id represents two distinct semantic entities disambiguated 
syntactically. I for one favor the second approach as it matches my 
intent, can I say, more faithfully.



[There may be other reasons for nested/sub-modules, but records ain't it.]


as above, however, I believe the best approach is to explore both paths 
and perhaps extend Haskell in both directions.


As in your example the particular construction of which suits, the 
approach offered in your email better because it matches your intent 
more faithfully. It is this ability to match construction with intent 
that is critical, which alludes to the notion of the expressivity of a 
language.



The reason was hinted at way back in Chris Done's attachment to the original
Records wiki http://hackage.haskell.org/trac/ghc/wiki/Records, 'types in a non-
trivial project'.

Let's say I have a database application with a field (meaning type)
customer_id. Then it appears in records for name and address, pricing, order
entry, etc. This is not a name 'clash', it's 'intended sharing'. (It really
galls me to even put it that way for explanatory purposes. Really it's the
**same** customer_id.)

In data model design you'd typically go about identifying all the fields
(types aka attributes) and putting them in a data dictionary. Then you'd
construct your records from them.

You might (possibly) put the data dictionary in a distinct module, for easy
maintenance. But you'd certainly want all the customer-related records in the
same module.

So a data decl:
  data Customer_NameAddress = Cust_NA { customer_id :: Int, ... }

is _not_ declaring customer_id, it's _using_ an already-declared field.
(Actually, if you've got any sense, you'll declare:
  newtype Customer_id = Customer_id Int
  data ... = { customer_id :: Customer_id, ... }
and that takes us to Type-indexed records and then a short hop to anonymous
tuples and polymorphic records and tuple extension/concat/merge and ... one
day!)
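
Spelled out (a hedged sketch; the selector names are invented, and have to 
differ because today's Haskell still forces that), the data-dictionary style 
looks like:

  newtype Customer_id = Customer_id Int  deriving (Eq, Show)

  data Customer_NameAddress = Cust_NA
    { naCustomerId :: Customer_id
    , naName       :: String
    }

  data OrderEntry = OrderEntry
    { oeCustomerId :: Customer_id
    , oeQuantity   :: Int
    }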

The other purpose of the data dictionary is to declare what DBMS's call
the 'domain' of the field (Int in the case of customer_id). The terminology's
going to get a bit confusing here: Haskell's field name (selector functions)
apply to the record as the function's domain, and Int as the result (range).

For Haskell's field selectors we might also want to constrain the records they
can be used in. (For example they must be 'Persist'able so that we can write
them to an external database.)

So, to the proposal (I'm assuming http://www.haskell.org/pipermail/glasgow-
haskell-users/2011-December/021298.html can be made workable. SPJ has been
kind enough to give it a once-over http://www.haskell.org/pipermail/glasgow-
haskell-users/2012-January/021744.html, but of course all faults are the
author's alone.) Specifically, there's to be a class Has with methods get and
set. (This is grossly simplified; see the posts):

0. class Has r fld t where       -- record r has field fld at type t
     get :: r -> fld -> t
     set :: fld -> t -> r -> r'  -- update t into r
       -- the r' is to cater for type-changing updates
       -- it's actually a type function over r fld t

And then:
1. We need -XDisambiguateRecordFields,
so that we can talk about specific record types 
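
Picking up the Has class from item 0: a much-simplified, hedged sketch of 
what one concrete instance might look like, with the type-changing r' dropped 
so that set returns the same record type, and with the label and record names 
purely hypothetical:

  {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

  data Customer_id = Customer_id      -- the field label, as a (proxy) type

  class Has r fld t | r fld -> t where
    get :: r -> fld -> t
    set :: fld -> t -> r -> r         -- type-changing update omitted here

  data Order = Order { ordCust :: Int, ordTotal :: Double }

  instance Has Order Customer_id Int where
    get r _   = ordCust r
    set _ v r = r { ordCust = v }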

Re: Records in Haskell - namespacing for fields

2012-01-31 Thread AntC
John Lask jvlask at hotmail.com writes:

 
 On 1/02/2012 12:26 AM, AntC wrote:
 
  No! no! no! For records _don't_ put records in nested/sub-modules, and 
_don't_
  require them in separate modules (as currently). Here's how ...
 
 
 namespace management and record systems are intimately related but, as 
 you assert, distinct issues.
 
 taking your example of Customer_id, I expressly want to be able to 
 define, in the same syntactic module (file), two records with a field 
 with exactly the same name at two different types, say Customer_id :: 
 Int and Customer_id :: String. As I understand it, your proposal 
 would not enable this.

John, you can't do this now. You can't even define in the same module two 
records with a same-named field at the _same_ type.

[I'd better check first why you've put Customer_id with upper case, that's a 
type, not a field. Are you sure what you're talking about?]

I'd certainly dispute that there's anything sensible in doing that (why? 
what's the use case?), and I guess my proposal is not making that easy, but it 
is possible (and in fact no more difficult than a 'fixed' type for a field).

You'd go:
  field customer_id :: r -> t  -- which is the same as no type spec at all
Then:
  data CustInt = CustInt { customer_id :: Int, ... }
  data CustString = CustString { customer_id :: String, ... }
It doesn't stop you also doing:
  data CustBool = CustBool { customer_id :: Bool, ... }

(I could contrive that the domain of customer_id is constrained to only Int or 
String, but why?)

 
 In reality these two different uses of the name Customer_id are 
 unrelated and distinct. Some would argue that therefore they should 
 rightly have distinct names; however, that is moot: for the purpose of 
 this discussion let's take this as the objective.

No, I'm not going to take that as an objective. You'd have to provide a much 
better motivation for wanting to have two identical names in the same scope 
that are unrelated and distinct.

I'd agree with the 'some' that you should use two different names. Or if this 
is an 'accidental' clash of names from developers working separately (this is 
my distant planet example), then the clash is not special for field names, and 
it's what the module system is for. Use My.customer_id and Their.customer_id.

 
 There are two roads to travel: Customer_id is one semantic entity (in 
 the sense of an overloaded field, disambiguated by the type system) or 
 Customer_id represents two distinct semantic entities disambiguated 
 syntactically. I for one favor the second approach as it matches my 
 intent, can I say, more faithfully.
 

I'm interested to know how you disambiguate syntactically distinct entities 
with identical names in the same scope.

  [There may be other reasons for nested/sub-modules, but records ain't it.]
 
 as above, however, I believe the best approach is to explore both paths 
 and perhaps extend Haskell in both directions.
 

No, we can't afford to keep exploring multiple paths. What happens in fact is 
that it's extremely hard work; there are very few people who can actually 
implement it, and they (frankly) are not very interested when there are so many 
other exciting possible developments. (Record systems don't count as exciting 
for programming language research: the ground is already well covered.) 
There's barely the resourcing to extend Haskell in just one way, and then only 
providing the change is minimal.
Haskell 98's record system has been a wart since -- errm -- 1996.


 As in your example the particular construction of which suits, the 
 approach offered in your email better because it matches your intent 
 more faithfully. It is this ability to match construction with intent 
 that is critical, which alludes to the notion of the expressivity of a 
 language.
 

Eh? This sounds like metaphysics.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-01-28 Thread Anthony Clayden
 On Thu, Jan 26, 2012 at 8:02 PM, AntC
 anthony_clay...@clear.net.nz wrote:
 
  Ryan Newton rrnewton at gmail.com writes:
 
  
   I admit I'm a big fan of polymorphic extension.  But I
  don't love it enough
  for it to impede progress!
 
 
 Records proposals for Haskell have repeatedly foundered on
 the rocks of extensibility.  Meanwhile, it seems like
 years of experience with field extensibility in OO
 languages has shown that it's not an especially good idea,
 with authors on programming practice militating for
 information hiding instead.
 
 I don't think it's worth treading that path yet again in
 Haskell.
 
 -Jan

Jan, I agree we shouldn't try leaping forward to
extensibility yet. I disagree that we should abandon any
thoughts of it and produce a stopgap approach for records
that won't ever be extensible.

The 'proof of concept' I posted to the list last month
http://www.haskell.org/pipermail/glasgow-haskell-users/2011-December/021298.html
already includes get and set over polymorphic records and
polymorphic fields, I believe, which is half way there.

I love Haskell for the way it learns from well-structured
mathematical approaches -- in the way I feel OO doesn't.
There is a well-structured mathematically sound approach for
extensibility, as it happens. It dates back to 1969. It is
the 'engine' behind large-scale programming systems all over
the world every day. I do and have worked with a lot of
them.

It's called Relational Algebra, it's based on set theory,
it's declarative - which should fit smoothly with Haskell.
It has an operation to extend records called 'extend' (!).
It has an operation to merge records called 'join'. It has
an operation to concatenate records called 'cross-product'

You'll probably know it by its 'awkward cousin' SQL. There
are many reasons for hating SQL, and there are many reasons
why it's a bad fit to OO -- especially because SQL is
declarative.

There are many reasons to go back to the better-founded
mathematical basis that pre-dates SQL.

AntC

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-01-26 Thread Greg Weber
The starting point of a new records implementation was to be pragmatic
and get something done. Simon has identified that Has constraints are
required to implement records.

However, in general, exposing an internal implementation to a user is
an idea that should give great pause. It makes it difficult to switch
implementations in the future, and there is likely a more ideal
interface for the end user - I believe that is the case for Has
constraints.

So I propose we move forward with implementing Has constraints, but do
not expose it to the user (similar to TDNR). We don't add abstraction
over fields or virtual fields or any special capabilities that would
expose the implementation. After we have a working records
implementation that meets our goal of solving name-spacing, we can
re-visit what the best interface is to present to the user for more
advanced records capabilities.

There were complaints about TDNR previously. I think we can address
them by noting that we are limiting the scope to just records and that
it is just a beginning that will not limit us.

With this in mind, what limitations are left? Are updates still a
problem, and can we solve it (for now at least) by being more
demanding of type annotations?

Greg Weber

On Fri, Jan 20, 2012 at 3:22 PM, Greg Weber g...@gregweber.info wrote:
 2012/1/18 Simon Peyton-Jones simo...@microsoft.com:
 |  Has *is* a type class. It can be used and abused like any other.
 |  Record members with the same key ought to have the same semantics; the
 |  programmer must ensure this, not just call them all x or the like.
 | 
 |  Weak types these are not. The selector type is well-defined. The value
 |  type is well-defined. The record type is well-defined, but of course
 |  we define a type-class to let it be polymorphic.

 I want to mention that the issue Greg raises here is tackled under 
 Representation hiding.

 The way we currently prevent random clients of a data type from selecting 
 its foo field is by hiding the record selector foo. Similarly for its 
 data constructors. This is Haskell's way of doing data abstraction; it may 
 not be the best way, but it's Haskell's way.

 The trouble with instance declarations is that they are *always* exported.  
 No hiding.

 Under Representation hiding I suggest that

 * If the record selector foo is in scope (by any name),
  then the corresponding Has instance is in scope too
  and vice versa.

 That would match up with Haskell's hiding mechanisms precisely, albeit at 
 the cost of having an ad-hoc rule for Has instances.

 Simon


 I am not just concerned about Has instances from modules not under my control.
 I am concerned there is a new implicit rule that must be explained to
 me and that I have no control over: that every record field in the
 same module with a given label must have the exact same semantics as
 all others with the same label.

 Currently we don't have this rule. Currently we have records with
 essentially the same field labels, but with prefixes so they can
 coexist, and they need not have the same semantics.

 There is an extra problem with adding this rule: it is common practice
 to put types that arguably should be spread across multiple modules
 into one (module often named Types). Some form of this must be done
 out of necessity in the case of circular references between types that
 will otherwise not resolve.

 By default in Yesod, we do this out of necessity for records
 representing database tables. We call the module Models and users are
 free to write functions there. Now we have to explain to them that
 they should not.

 What use case for abstraction over record fields is too burdensome to
 use a type class for? Lacking such a compelling use case we are adding
 implicit complexity that can result in weak-typing for the unwary user
 with no compelling benefit.

 The wiki page for overloaded records suggests that users are expected
 to write code like this for virtual selectors:

    instance Has Shape "area" Float where
      get = area

 This is a leaky abstraction forcing me to quote function names at the
 type level. I would rather not be writing this code - explaining this
 code to someone means delving into implementation details that I don't
 understand.

 The overloaded records proposal is vying for extensibility by
 supporting abstraction over record fields with the same name. For this
 dubious use case that type classes already satisfies we lose
 user-friendly implementation of obviously useful functionality like
 virtual selectors, and also explainability.

 There is something to be said for the fact that I can quickly
 comprehend explanations of module-based record systems. I have read
 over the Overloaded Records proposal several times now and I have the
 gist of it, but I am still confused on details and need another couple
 reads to look at all the details instead of glossing over them.

 So far I have found some details on record implementations in four FP
 languages. 

Re: Records in Haskell

2012-01-26 Thread AntC
Ryan Newton rrnewton at gmail.com writes:

 
 I admit I'm a big fan of polymorphic extension.  But I don't love it enough 
for it to impede progress!  
 
 Regarding extension:  In trying to read through all this material I don't 
see a lot of love for lacks constraints a la TRex.  
 Cheers,
   -Ryan
 
Hi Ryan, I think the lacks constraint is sadly misunderstood, and in fact 
something like it will be needed eventually.

(If anybody who knows the internals of Hugs/TRex is listening, it would be 
great to get confirmation of the following.)

As you say, it relates to whatever might happen for polymorphic records, so is 
outside the scope of what SPJ is trying to address in this thread.

On the other hand, let's try to avoid developing an approach for 'Records in 
Haskell' that has to be undone when/if we get more polymorphic.

[Weird! somewhere between gmaniac and pipermail, half my post went missing: I 
was only just warming up this far. Try again ...]

From all the suggestions in the current thread, I'm expecting the approach 
will include a Has class with methods get and set. It would be sweet if in 
future the same Has class could be extended to extended(!) records, anonymous 
records, renamed labels, projections on records, merged records (as you'd get 
from a relational join), etc. Specifically:
  Has r l t => ... really must mean 
 there's exactly one field labelled l in record r, at type t
 (which is a constraint on whatever merged/extended term r is)
compare TRex's
  (r\l) => ... Rec {| l : t | r |} ...   which really means
 there's exactly one field labelled l in the Rec, at type t

In hindsight, I think it was unfortunate that the original TRex paper [1] used 
the word lacks in context of its notation for constraining the record 
structure. (I'm looking especially at section 2.1 'Basic Operations'.) In all 
the operations, the types always include a Rec term with _both_ l and r. They 
don't all have a term of a 'bare':
  Rec {| r |}
TRex is trying to avoid a positional specification of the record fields (which 
is 'implementation determined'/hidden from the programmer). But I think 
of 'bare' r as representing a record with a 'hole' at whatever position l is 
in the full record. (So the constraint (r\l) means: you're going to find a Rec 
with exactly one l in it; if you also find a Rec with 'bare' r, that means the 
same Rec, but with a 'hole'.)

The HList guys picked up the word lacks, and adapted it (successfully in 
context of what they were doing) to build 'Type Indexed Hlist's -- that is, 
record-like structures with exactly one field labelled l.

Perhaps TRex should have used a notation something like:
   (rr : l @ t) => Rec {| rr |} ...   -- HasUnique rr l t
   ... Rec {| rr \ l |} ...           -- rr with a hole in place of l


You say:
  As one anecdote, I've been very pleased using Daan Leijen's scoped labels 
approach
My anecdote: the new approaches and extensions to type inference in GHCi have 
been frustratingly slow in arriving and maturing. But we now have working 
superclass constraints, including type equality (~), and some heavy-weight 
type inference. I've built a toy (and somewhat limited) polymorphic record 
system (with Has/get/set), which:
treats Data types as records; and
treats tuples as anonymous (type-indexed) records; and
implements project, extend and merge.
It relies on overlapping instances (so I don't mention it in polite company -- 
at least it doesn't use FunDeps to boot!). I achieve the effect of 'HasUnique' 
through instance resolution: if there's more than one occurrence of label l in 
the record term, GHC bitches. (This is fragile: I can't use 
IncoherentInstances, and sometimes UndecidableInstances gives trouble.)
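
To give a flavour (a much-reduced, hedged sketch -- not the actual prototype): 
a pair treated as a type-indexed record, where a field's own type serves as 
its label, and the exactly-one-field check falls out of instance resolution.

  {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
  -- (overlap support needed too: the OverlappingInstances flag in the GHC of
  --  the day, or the per-instance pragmas used below in later GHCs)

  class HasType r t where
    getT :: r -> t

  instance {-# OVERLAPPING #-} HasType (t, b) t where
    getT (x, _) = x

  instance {-# OVERLAPPING #-} HasType (a, t) t where
    getT (_, y) = y

  -- getT ('x', True) :: Bool          -- picks the second component
  -- getT (1 :: Int, 2 :: Int) :: Int  -- rejected: two fields of type Int,
  --                                   -- so instance resolution complains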


[1] A Polymorphic Type System for Extensible Records and Variants, Gaster/Mark 
P. Jones, 1996.



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-01-26 Thread AntC
Ryan Newton rrnewton at gmail.com writes:

 I admit I'm a big fan of polymorphic extension.  But I don't love it enough 
for it to impede progress!  
 
 Regarding extension:  In trying to read through all this material I don't 
see a lot of love for lacks constraints a la TRex.  
 Cheers,
   -Ryan
 
Hi Ryan, I think the lacks constraint is sadly misunderstood, and in fact 
something like it will be needed eventually.

[Weird! somewhere between gmaniac and pipermail, half my post went missing: I 
was only just warming up this far. Try again ... again ...]

(If anybody who knows the internals of Hugs/TRex is listening, it would be 
great to get confirmation of the following.)

As you say, it relates to whatever might happen for polymorphic records, so is 
outside the scope of what SPJ is trying to address in this thread.

On the other hand, let's try to avoid developing an approach for 'Records in 
Haskell' that has to be undone when/if we get more polymorphic.

From all the suggestions in the current thread, I'm expecting the approach 
will include a Has class with methods get and set. It would be sweet if in 
future the same Has class could be extended to extended(!) records, anonymous 
records, renamed labels, projections on records, merged records (as you'd get 
from a relational join), etc. Specifically:
  Has r l t => ... really must mean 
 there's exactly one field labelled l in record r, at type t
 (which is a constraint on whatever merged/extended term r is)
compare TRex's
  (r\l) => ... Rec {| l : t | r |} ...   which really means
 there's exactly one field labelled l in the Rec, at type t

In hindsight, I think it was unfortunate that the original TRex paper [1] used 
the word lacks in context of its notation for constraining the record 
structure. (I'm looking especially at section 2.1 'Basic Operations'.) In all 
the operations, the types always include a Rec term with _both_ l and r. They 
don't all have a term of a 'bare':
  Rec {| r |}
TRex is trying to avoid a positional specification of the record fields (which 
is 'implementation determined'/hidden from the programmer). But I think 
of 'bare' r as representing a record with a 'hole' at whatever position l is 
in the full record. (So the constraint (r\l) means: you're going to find a Rec 
with exactly one l in it; if you also find a Rec with 'bare' r, that means the 
same Rec, but with a 'hole'.)

The HList guys picked up the word lacks, and adapted it (successfully in 
context of what they were doing) to build 'Type Indexed Hlist's -- that is, 
record-like structures with exactly one field labelled l.

Perhaps TRex should have used a notation something like:
   (rr : l @ t) => Rec {| rr |} ...   -- HasUnique rr l t
   ... Rec {| rr \ l |} ...           -- rr with a hole in place of l


You say:
  As one anecdote, I've been very pleased using Daan Leijen's scoped labels 
approach
My anecdote: the new approaches and extensions to type inference in GHCi have 
been frustratingly slow in arriving and maturing. But we now have working 
superclass constraints, including type equality (~), and some heavy-weight 
type inference. I've built a toy (and somewhat limited) polymorphic record 
system (with Has/get/set), which:
treats Data types as records; and
treats tuples as anonymous (type-indexed) records; and
implements project, extend and merge.
It relies on overlapping instances (so I don't mention it in polite company -- 
at least it doesn't use FunDeps to boot!). I achieve the effect of 'HasUnique' 
through instance resolution: if there's more than one occurrence of label l in 
the record term, GHC bitches. (This is fragile: I can't use 
IncoherentInstances, and sometimes UndecidableInstances gives trouble.)


[1] A Polymorphic Type System for Extensible Records and Variants, Gaster/Mark 
P. Jones, 1996.



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-01-26 Thread Anthony Clayden
Ryan Newton rrnewton at gmail.com writes:

 
 I admit I'm a big fan of polymorphic extension.  But I
don't love it enough 
for it to impede progress!  
 
 Regarding extension:  In trying to read through all this
material I don't 
see a lot of love for lacks constraints a la TRex.  
 Cheers,
   -Ryan
 
Hi Ryan, I think the lacks constraint is sadly
misunderstood, and in fact 
something like it will be needed eventually.

[A thousand apologies for the multiposts! somewhere between
gmaniac and pipermail,
 half my posts are going missing: I was only just warming up
this far. Try again ... again ... third time]


(If anybody who knows the internals of Hugs/TRex is
listening, it would be 
great to get confirmation of the following.)

As you say, it relates to whatever might happen for
polymorphic records, so is 
outside the scope of what SPJ is trying to address in this
thread.

On the other hand, let's try to avoid developing an approach
for 'Records in 
Haskell' that has to be undone when/if we get more
polymorphic.

From all the suggestions in the current thread, I'm
expecting the approach 
will include a Has class with methods get and set. It would
be sweet if in 
future the same Has class could be extended to extended(!)
records, anonymous 
records, renamed labels, projections on records, merged
records (as you'd get 
from a relational join), etc. Specifically:
  Has r l t => ... really must mean 
 there's exactly one field labelled l in record r,
at type t
 (which is a constraint on whatever merged/extended
term r is)
compare TRex's
  (r\l) => ... Rec {| l : t | r |} ...   which really
means
 there's exactly one field labelled l in the Rec, at
type t

In hindsight, I think it was unfortunate that the original
TRex paper [1] used 
the word lacks in context of its notation for constraining
the record 
structure. (I'm looking especially at section 2.1 'Basic
Operations'.) In all 
the operations, the types always include a Rec term with
_both_ l and r. They 
don't all have a term of a 'bare':
  Rec {| r |}
TRex is trying to avoid a positional specification of the
record fields (which 
is 'implementation determined'/hidden from the programmer).
But I think of 'bare' r as representing a record with a
'hole' at
whatever position l is in the full record. (So the
constraint (r\l) means: you're
going to find a Rec with exactly one l in it; if you also
find a Rec with 'bare'
r, that means the same Rec, but with a 'hole'.)

The HList guys picked up the word lacks, and adapted it
(successfully in 
context of what they were doing) to build 'Type Indexed
Hlist's -- that is, 
record-like structures with exactly one field labelled l.

Perhaps TRex should have used a notation something like:
   (rr : l @ t) => Rec {| rr |} ...   -- HasUnique rr l t
   ... Rec {| rr \ l |} ...           -- rr with a hole in place of l


You say:
  As one anecdote, I've been very pleased using Daan
Leijen's scoped labels 
approach
My anecdote: the new approaches and extensions to type
inference in GHCi have 
been frustratingly slow in arriving and maturing. But we now
have working 
superclass constraints, including type equality (~), and
some heavy-weight 
type inference. I've built a toy (and somewhat limited)
polymorphic record 
system (with Has/get/set), which:
treats Data types as records; and
treats tuples as anonymous (type-indexed) records; and
implements project, extend and merge.
It relies on overlapping instances (so I don't mention it in
polite company -- 
at least it doesn't use FunDeps to boot!). I achieve the
effect of 'HasUnique' 
through instance resolution: if there's more than one
occurrence of label l in 
the record term, GHC bitches. (This is fragile: I can't use
IncoherentInstances, and sometimes UndecidableInstances
gives trouble.)


[1] A Polymorphic Type System for Extensible Records and
Variants, Gaster/Mark 
P. Jones, 1996.





___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-01-26 Thread AntC
Ryan Newton rrnewton at gmail.com writes:

 
 I admit I'm a big fan of polymorphic extension.  But I don't love it enough 
for it to impede progress!  
 
 Regarding extension:  In trying to read through all this material I don't 
see a lot of love for lacks constraints a la TRex.
 Cheers,
   -Ryan

Hi Ryan, I think the lacks constraint is sadly misunderstood, and in fact 
something like it will be needed eventually.

[A further thousand apologies for the multi-multiposts! somewhere in pipermail
 half my posts are going missing: I was only just warming up this far. Try 
again ... again ... third time ... last time]

(If anybody who knows the internals of Hugs/TRex is listening, it would be 
great to get confirmation of the following.)

As you say, it relates to whatever might happen for polymorphic records, so is 
outside the scope of what SPJ is trying to address in this thread.

On the other hand, let's try to avoid developing an approach for 'Records in 
Haskell' that has to be undone when/if we get more polymorphic.

All the suggestions in the current thread lead to an approach with a Has class 
with methods get and set. It would be sweet if in future the same Has class 
could be extended to extended(!) records, anonymous records, renamed labels, 
projections on records, merged records (as you'd get from a relational join), 
etc. Specifically:
  Has r l t => ... really must mean 
 there's exactly one field labelled l in record r, at type t
 (which is a constraint on whatever merged/extended term r is)
compare TRex's
  (r\l) => ... Rec {| l : t | r |} ...   which really means
 there's exactly one field labelled l in the Rec, at type t

In hindsight, I think it was unfortunate that the original TRex paper [1] used 
the word lacks in context of its notation for constraining the record 
structure. (I'm looking especially at section 2.1 'Basic Operations'.) In all 
the operations, the types always include a Rec term with _both_ l and r. They 
don't all have a term of a 'bare':
  Rec {| r |}
TRex is trying to avoid a positional specification of the record fields (which 
is 'implementation determined'/hidden from the programmer). But I think 
of 'bare' r as representing a record with a 'hole' at whatever position l is 
in the full record. (So the constraint (r\l) means: you're going to find a Rec 
with exactly one l in it; if you also find a Rec with 'bare' r, that means the 
same Rec, but with a 'hole'.)

The HList guys picked up the word lacks, and adapted it (successfully in 
context of what they were doing) to build 'Type Indexed Hlist's -- that is, 
record-like structures with exactly one field labelled l.

Perhaps TRex should have used a notation something like:
   (rr : l @ t) => Rec {| rr |} ...   -- HasUnique rr l t
   ... Rec {| rr \ l |} ...           -- rr with a hole in place of l


You say:
  As one anecdote, I've been very pleased using Daan Leijen's scoped labels 
approach.
My anecdote: the new approaches and extensions to type inference in GHCi have 
been frustratingly slow in arriving and maturing. But we now have working 
superclass constraints, including type equality (~), and some heavy-weight 
type inference. I've built a toy (and somewhat limited) polymorphic record 
system (with Has/get/set), which:
treats Data types as records; and
treats tuples as anonymous (type-indexed) records; and
implements project, extend and merge.
It relies on overlapping instances (so I don't mention it in polite company -- 
at least it doesn't use FunDeps to boot!). I achieve the effect of 'HasUnique' 
through instance resolution: if there's more than one occurrence of label l in 
the record term, GHC bitches. (This is fragile: I can't use 
IncoherentInstances, and sometimes UndecidableInstances gives trouble.)


[1] A Polymorphic Type System for Extensible Records and Variants, Gaster/Mark 
P. Jones, 1996.
 





___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-01-20 Thread Greg Weber
Thank you Nils for those excellent links.

Yitz would like Agda's module/record setup which allows for multiple
modules in a single file.
Records are yet another local module.
The neat thing is that, like any module, you can open it, or put it
into the scope of your current module for convenient use that avoids
the need for qualifiers (see the 'Record opening' example).
Johan, I think Agda's use of records might also be what you are getting at.

On Tue, Jan 17, 2012 at 7:10 AM, Nils Anders Danielsson n...@chalmers.se 
wrote:
 On 2012-01-16 19:16, Yitzchak Gale wrote:

 Allow nested modules. [...]


 Perhaps Agda's module/record system can provide some inspiration:

  http://wiki.portal.chalmers.se/agda/pmwiki.php?n=ReferenceManual.Modules
  http://wiki.portal.chalmers.se/agda/pmwiki.php?n=ReferenceManual.Records

 (I don't think the wiki pages above are entirely complete/up to date,
 but for the purposes of this discussion they should do.)

 --
 /NAD


 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-01-20 Thread Gábor Lehel
2012/1/18 Simon Peyton-Jones simo...@microsoft.com:
 |  Has *is* a type class. It can be used and abused like any other.
 |  Record members with the same key ought to have the same semantics; the
 |  programmer must ensure this, not just call them all x or the like.
 | 
 |  Weak types these are not. The selector type is well-defined. The value
 |  type is well-defined. The record type is well-defined, but of course
 |  we define a type-class to let it be polymorphic.

 I want to mention that the issue Greg raises here is tackled under 
 Representation hiding.

 The way we currently prevent random clients of a data type from selecting its 
 foo field is by hiding the record selector foo. Similarly for its data 
 constructors. This is Haskell's way of doing data abstraction; it may not be 
 the best way, but it's Haskell's way.

 The trouble with instance declarations is that they are *always* exported.  
 No hiding.

 Under Representation hiding I suggest that

 * If the record selector foo is in scope (by any name),
  then the corresponding Has instance is in scope too
  and vice versa.

 That would match up with Haskell's hiding mechanisms precisely, albeit at the 
 cost of having an ad-hoc rule for Has instances.

 Simon


I think these are separate issues. To use the analogy with functions
again, imagine we could do this:

module A where
data Foo = Foo

foo :: Foo -> Int
foo Foo = 9

module B where
bar :: HasFunction a "foo" (a -> Int) => a -> Int
bar a = foo a

module Main where
import A
import B
main = print $ bar Foo

Would we like it?

The problem isn't that you can access unexported functions (though
that would also be a problem), the problem is that you're overloading
functions based on only their name and type, and the foo you find
might have a different meaning from the one you expected. With type
classes, we have a mechanism to ensure that code at point A and code
at point B are using the same assumptions. If you declare an instance
and it doesn't match the assumptions set out for the class, it's a
programmer error. But you can't realistically say "don't declare a
function with this name and type unless you mean the same thing by it
that I mean by it". If we want similar safety for record fields, we
could use a similar mechanism. (Again, without addressing the question
of what we want, because I don't know.)

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-01-20 Thread Greg Weber
2012/1/18 Simon Peyton-Jones simo...@microsoft.com:
 |  Has *is* a type class. It can be used and abused like any other.
 |  Record members with the same key ought to have the same semantics; the
 |  programmer must ensure this, not just call them all x or the like.
 | 
 |  Weak types these are not. The selector type is well-defined. The value
 |  type is well-defined. The record type is well-defined, but of course
 |  we define a type-class to let it be polymorphic.

 I want to mention that the issue Greg raises here is tackled under 
 Representation hiding.

 The way we currently prevent random clients of a data type from selecting its 
 foo field is by hiding the record selector foo. Similarly for its data 
 constructors. This is Haskell's way of doing data abstraction; it may not be 
 the best way, but it's Haskell's way.

 The trouble with instance declarations is that they are *always* exported.  
 No hiding.

 Under Representation hiding I suggest that

 * If the record selector foo is in scope (by any name),
  then the corresponding Has instance is in scope too
  and vice versa.

 That would match up with Haskell's hiding mechanisms precisely, albeit at the 
 cost of having an ad-hoc rule for Has instances.

 Simon


I am not just concerned about Has instances from modules not under my control.
I am concerned there is a new implicit rule that must be explained to
me and that I have no control over: that every record field in the
same module with a given label must have the exact same semantics as
all others with the same label.

Currently we don't have this rule. Currently we have records with
essentially the same field labels, but with prefixes so they can
coexist, and they need not have the same semantics.

There is an extra problem with adding this rule: it is common practice
to put types that arguably should be spread across multiple modules
into one (module often named Types). Some form of this must be done
out of necessity in the case of circular references between types that
will otherwise not resolve.

By default in Yesod, we do this out of necessity for records
representing database tables. We call the module Models and users are
free to write functions there. Now we have to explain to them that
they should not.

What use case for abstraction over record fields is too burdensome to
use a type class for? Lacking such a compelling use case we are adding
implicit complexity that can result in weak-typing for the unwary user
with no compelling benefit.

The wiki page for overloaded records suggests that users are expected
to write code like this for virtual selectors:

instance Has Shape "area" Float where
  get = area

This is a leaky abstraction forcing me to quote function names at the
type level. I would rather not be writing this code - explaining this
code to someone means delving into implementation details that I don't
understand.
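
For anyone trying to picture what that snippet involves: a hedged, 
self-contained approximation using today's extensions (the actual proposal 
would not expose a Proxy argument or require hand-written plumbing like this; 
all the names are only illustrative):

  {-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
               FunctionalDependencies, FlexibleInstances #-}

  import Data.Proxy   (Proxy (..))
  import GHC.TypeLits (Symbol)

  -- string-kinded labels, roughly in the SORF style
  class Has r (l :: Symbol) t | r l -> t where
    get :: Proxy l -> r -> t

  data Shape = Circle { radius :: Float }

  -- an ordinary, stored field ...
  instance Has Shape "radius" Float where
    get _ = radius

  -- ... and a "virtual" one, computed rather than stored
  instance Has Shape "area" Float where
    get _ s = pi * radius s * radius s

  -- e.g.  get (Proxy :: Proxy "area") (Circle 1.0)  evaluates to pi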

The overloaded records proposal is vying for extensibility by
supporting abstraction over record fields with the same name. For this
dubious use case that type classes already satisfies we lose
user-friendly implementation of obviously useful functionality like
virtual selectors, and also explainability.

There is something to be said for the fact that I can quickly
comprehend explanations of module-based record systems. I have read
over the Overloaded Records proposal several times now and I have the
gist of it, but I am still confused on details and need another couple
reads to look at all the details instead of glossing over them.

So far I have found some details on record implementations in four FP
languages. Every single one implements a module-like namespace for
records, one after abandoning the abstraction over fields approach.
There are differing good approaches to convenient access - I think
that is where it is appropriate for Haskell to attempt to take a
different approach.

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Posting etiquette, was Re: Records in Haskell

2012-01-19 Thread Malcolm Wallace
Sorry to pick on your post in particular Matthew, but I have been seeing a lot 
of this on the Haskell lists lately.

I find it completely unreasonable for a reply to a very long post to quote the 
entire text, only to add a single line at the bottom (or worse, embedded in the 
middle somewhere).  In this case, there are 7 pages of quotation before your 
one-sentence contribution.  (That is on my laptop.  I dread to think how many 
pages it represents on a smartphone screen...)  Usually, if I need to scroll 
even to the second page-worth of quotation and have still not found any new 
text, I now just delete the post without reading it.

It is a failure to communicate well, on the part of the writer who values their 
own time more highly than that of their intended readers.  Even the 
much-maligned top-posting style, as forced upon Outlook users (and as I am 
doing right here), is preferable to the failure to trim, or to get to the point 
quickly.  My inbox has 1600 unread messages in it, and life is just too short. 
 So I offer this plea as a constructive social suggestion - if you want your 
ideas to reach their intended audience, don't annoy them before they have even 
seen what you want to say.

Regards,
Malcolm


On 15 Jan 2012, at 20:33, Matthew Farkas-Dyck wrote:

 On 13/01/2012, Simon Peyton-Jones simo...@microsoft.com wrote:
 Thanks to Greg for leading the records debate.  I apologise that I
 don't have enough bandwidth to make more than an occasional
 contribution.  Greg's new wiki page, and the discussion so far has
 clarified my thinking, and this message tries to express that new
 clarity.  I put a conclusion at the end.
 
 Simon
 
 Overview
 
 It has become clear that there are two elements to pretty much all the
 proposals we have on the table.  Suppose we have two types, 'S' and 'T',
 both with a field 'f', and you want to select field 'f' from a record 'r'.
 Somehow you have to disambiguate which 'f' you mean.
 
 (Plan A) Disambiguate using qualified names.  To select field f, say
(S.f r) or (T.f r) respectively.
 
 (Plan B) Disambiguate using types. This approach usually implies
 dot-notation.
 If  (r::S), then (r.f) uses the 'f' from 'S', and similarly if
 (r::T).
 
 Note that
 
 * The Frege-derived records proposal (FDR), uses both (A) and (B)
  http://hackage.haskell.org/trac/ghc/wiki/Records/NameSpacing
 
 * The Simple Overloaded Record Fields (SORF) proposal uses only (B)
  http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields
 
 * The Type Directed Name Resolution proposal (TDNR) uses only (B)
 
 http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolution
 
 I know of no proposal that advocates only (A).  It seems that we are agreed
  that we must make use of types to disambiguate common cases.
 
 Complexities of (Plan B)
 
 Proposal (Plan B) sounds innocent enough.  But I promise you, it isn't.
  There has been some mention of the left-to-right bias of the Frege type
  inference engine; indeed the whole explanation of which programs are
 accepted and which are rejected, inherently involves an understanding
 of the type inference algorithm.  This is a Very Bad Thing when the
 type inference algorithm gets complicated, and GHC's is certainly
 complicated.
 
 Here's an example:
 
   type family F a b
   data instance F Int [a] = Mk { f :: Int }
 
    g :: F Int b    -> ()
    h :: F a [Bool] -> ()
 
   k x = (g x, x.f, h x)
 
 Consider type inference on k.  Initially we know nothing about the
 type of x.
 * From the application (g x) we learn that x's type has
   shape (F Int something).
 * From the application (h x) we learn that x's type has
   shape (F something else [Bool])
 * Hence x's type must be (F Int [Bool])
 * And hence, using the data family we can see which field
   f is intended.
 
 Notice that
 a) Neither left to right nor right to left would suffice
 b) There is significant interaction with type/data families
(and I can give you more examples with classes and GADTs)
 c) In passing we note that it is totally unclear how (Plan A)
would deal with data families
 
 This looks like a swamp.  In a simple Hindley-Milner typed language
 you might get away with some informal heuristics, but Haskell is far
 too complicated.
 
 Fortunately we know exactly what to do; it is described in some detail
 in our paper Modular type inference with local assumptions
 http://www.haskell.org/haskellwiki/Simonpj/Talk:OutsideIn
 
 The trick is to *defer* all these decisions by generating *type constraints*
 and solving them later.  We express it like this:
 
   G, r:t1  |-  r.f : t2,  (Has t1 f t2)
 
 This says that if r is in scope with type t1, then (r.f) has type t2,
 plus the constraint (Has t1 f t2), which we read as saying
 
   Type t1 must have a field f of type t2
 
 We gather up all the constraints and solve them.  In solving them
 we may figure out t1 from some *other* constraint (to the left or
 right, 
