Re: [Haskell-cafe] ordNub

2013-10-13 Thread AntC
 Niklas Hambüchen mail at nh2.me writes:
 
 In sets, the order does not matter, while for nub it does.
 

Let's be careful here! Niklas, when you say "order", do you mean:
* the _ordering_ from the Ord instance? Or
* the relative sequence of elements in the list?

 ... the fact that Set is used inside my proposed 
 ordNub implementation is a detail not visible to the caller.

If you use the Set library, that fact may be very visible!
Because Set re-sequences the whole list, as per its Ord instance.

But List.nub preserves the list sequence (except for omitting duplicates).

Furthermore, the Ord instance might compare two elements as EQ, even 
though their Eq instance says they're not equal.

So a Set-based ordNub could end up returning:
* not the same elements as List.nub
* and/or not in the same list sequence

I'd call that very much *visible* to the caller.
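
To make the difference concrete, here's a tiny illustration (nothing beyond
the obvious imports; the element values are arbitrary):

    import Data.List (nub)
    import qualified Data.Set as Set

    -- nub keeps the first occurrence of each element, in input sequence:
    --   nub [3,1,3,2]                          ==> [3,1,2]
    -- a naive Set round-trip re-sequences as per the Ord instance:
    --   (Set.toList . Set.fromList) [3,1,3,2]  ==> [1,2,3]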

 
 That's why it looks like a Data.List function to me.
 

[BTW I am still less than convinced that overall a Set-based ordNub is 
significantly more efficient. I suspect it depends on how big is your 
list.]




Re: [Haskell-cafe] ordNub

2013-10-13 Thread AntC
 Niklas Hambüchen mail at nh2.me writes:
 
  On 13/10/13 21:42, AntC wrote:
  ...
  If you use the Set library, that fact may be very visible!
  Because Set re-sequences the whole list, as per its Ord instance.
  
  But List.nub preserves the list sequence
  (except for omitting duplicates).
 
 I mean *exactly* what you say here.
 
 ordNub has the same behaviour as nub, while (Set.toList .
 Set.fromList) doesn't.
 

That's great, thank you.

  [BTW I am still less than convinced that overall a Set-based ordNub is 
  significantly more efficient. I suspect it depends on how big is your 
  list.]
 
 What do you mean?
 
 ordNub is clearly in a different complexity class, ...

Yes, I'm not disputing that.

 ... and the benchmarks that I provided show not only this,
 but also that ordNub is *always* faster than nub,
 even for singleton lists.

Thanks Niklas, I hadn't spotted those benchmarks back in July.

I'm surprised at that result for singletons 
(and for very small numbers of elements which are in fact each different).

Especially because List's `nub` uses `filter` (a kind of fold), which should be 
tail-recursive.

It seems to me that for small numbers, the Set-based approach still 
requires comparing each element to each other. Plus there's the overhead 
for building the Set and inserting each element into it -- where `insert` 
again walks the Set to find the insertion point.

Then here's a further possible optimisation, instead of making separate 
calls to `member` and `insert`:
* Make a single call to
insert' :: (Ord a) => a -> Set a -> (Bool, Set a)
* The Bool returns True if already a member.
* Else returns an updated Set in the snd, with the element inserted.
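
As a sketch only (not anything the Set library provides under that name), the
semantics would be:

    import qualified Data.Set as Set

    -- semantics sketch: a real insert' would do this in a single traversal
    -- (later versions of containers can express that with Data.Set.alterF)
    insert' :: Ord a => a -> Set.Set a -> (Bool, Set.Set a)
    insert' x s
      | x `Set.member` s = (True, s)
      | otherwise        = (False, Set.insert x s)

    ordNub :: Ord a => [a] -> [a]
    ordNub = go Set.empty
      where
        go _    []     = []
        go seen (x:xs) = case insert' x seen of
          (True,  _)     -> go seen xs          -- duplicate: drop it
          (False, seen') -> x : go seen' xs     -- first occurrence: keep it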





Re: [Haskell-cafe] ordNub

2013-10-13 Thread AntC
 Niklas Hambüchen mail at nh2.me writes:
 
  On 14/10/13 03:20, AntC wrote:
  ... 
  Then here's a further possible optimisation, instead of making 
  separate calls to `member` and `insert`:
 
 This I understand again. Where do you get insert' from? containers
 doesn't seem to have it. Do you suggest adding it?
 

err, well I didn't have any specific library in mind.

Rather, there's a kind of 'folk idiom' for managing data structures,
(this applies more for imperative code/update-in-situ than functional)
that if you know the next thing you're going to do after failing to find 
an element is insert it, you might as well get on with the insert there 
and then.

(It's a higher-level analogue of a machine instruction decrement-and-
branch-if-zero.)

I'm looking at all the remarks about managing libraries and dependencies.
Would it make sense to build a stand-alone version of Set purely to 
support ordNub? Then it needs only methods `empty` and `insertIfAbsent`.
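
A minimal sketch of such a stand-alone structure -- an unbalanced binary search
tree, purely illustrative (a serious version would balance, as Data.Set does):

    data Tree a = Tip | Node (Tree a) a (Tree a)

    empty :: Tree a
    empty = Tip

    -- Nothing means the element was already present;
    -- Just t' is the tree with the element inserted
    insertIfAbsent :: Ord a => a -> Tree a -> Maybe (Tree a)
    insertIfAbsent x Tip          = Just (Node Tip x Tip)
    insertIfAbsent x (Node l y r) = case compare x y of
      EQ -> Nothing
      LT -> (\l' -> Node l' y r) <$> insertIfAbsent x l
      GT -> (\r' -> Node l y r') <$> insertIfAbsent x r

Threading that through the input list, as in the ordNub sketch above, gives
the duplicate test and the insert in one walk.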





Re: [Haskell-cafe] Using lenses

2013-10-03 Thread AntC
  Lenses for nested ... types ...
 

Hi Simon/Edward/all,

The most compelling uses I've seen for lenses is back to Benjamin Pierce's 
[et al] papers on Updatable Views. I think this is where the 'theory' 
started(?), although similar ideas had kicked around the relational 
database world for some time.

So, a trivial example of that would be treating co-ordinates as 
simultaneously polar and cartesian. This connects to Wadlers paper on 
Views, which became Haskell's view patterns.

I guess, though, that in a short talk, Simon won't have time to dig into 
all that history.

I see that many of the suggestions in response to Simon have talked about 
deeply nested structures. (Including for JSON and XML so-called databases.)

Now certainly there are applications for nested data, where the topic 
involves hierarchies. I can see that for AST's and (E)DSL's, organic 
molecules, exploring a game tree, ... nesting is the natural structure.

Given that you have a nested structure, lenses are a really neat approach. 
I'm going to offer a contrary view: lenses are a solution to a problem 
that never should have happened. A problem I'd characterise 
as 'Unnecessary Complexity' [per Fred Brooks].

The data processing industry abandon hierarchical databases in the '80's, 
because the relational model is so superior. In ten years time, I expect 
that XML-socalled databases will be regarded as a similar aberration.

One of the reasons is redundancy: XML so-called databases repeat field 
content all over the place. And that's going to give update anomalies: 
change some of those places, but fail to change all of them.

Now formally, hierarchical and relational data models can be made 
isomorphic. So I'm not criticising the ability to capture data. I am 
criticising the ease of access (for fetch and update). You end up with 
your code for the 'business logic' getting muddled in with the code for 
navigating the hierarchy, making both very brittle to maintain. Given that 
nested data gives you (especially) an update headache, lenses help you.

But a better solution is to do the data analysis up front, apply standard 
normalisation techniques, and deal with 'flat' data models.

And for flat data models, I don't see lenses having any advantages over 
old-fashioned records. (But! We do need to solve the Field names problem. 
It's great that Adam's GSOC work has incorporated lenses.)

AntC



Re: [Haskell-cafe] type constructor section for (-> Bool), _not_ ((->) Bool)

2013-09-04 Thread AntC
 Brent Yorgey byorgey at seas.upenn.edu writes:
 
  On Tue, Sep 03, 2013 at 11:33:46AM +, AntC wrote:
  
  I want an instance and type improvement constraint of the form
  
  instance (f ~ (-> Bool)) => C Foo (f b) where ...
  
 
 There is no operator section syntax for types.  Moreover, this is not
 just an issue of syntax: it is impossible to partially apply a type
 constructor to anything other than its first argument, because there
 is no type-level lambda.
 

Thank you Brent, so I'm not being entirely dumb ;-).

  
  data FlipFun b  -- abstract
  instance (f ~ FlipFun) => C Foo (f b) where ...
  
  And a type function inside the class to generate the type.
  But then I'd have to apply the type function for all instances,
  and in most places it'd be id.
 
 That's the only way to do it that I know of.
 

OK, I've got that working. My class is a helper doing type discrimination. 
So the constraints have 'infected' the caller and the caller's caller, 
etc ...

Elegant it ain't.

AntC




[Haskell-cafe] type constructor section for (-> Bool), _not_ ((->) Bool)

2013-09-03 Thread AntC
I'm probably being dumb, but Hoogle nor the wiki are helping me.

I want an instance and type improvement constraint of the form

instance (f ~ (-> Bool)) => C Foo (f b) where ...

The first arg to C is driving type improvement, for example:

instance (f ~ []) => C Bar (f b) where ...

(The real instances are more complex, and involve overlap, of course.)

I'm trying to write a section to get the improved type (b -> Bool), 
but (-> Bool) or ((-> Bool) b) is invalid syntax.

This is valid, but wrong: ((->) Bool) b  -- gives (Bool -> b).

I could do:

data FlipFun b  -- abstract
instance (f ~ FlipFun) => C Foo (f b) where ...

And a type function inside the class to generate the type.
But then I'd have to apply the type function for all instances,
and in most places it'd be id.

??




Re: [Haskell-cafe] Retrieving Haddock comments with haskell-src-exts

2013-08-21 Thread AntC
 Niklas Broberg niklas.broberg at gmail.com writes:
 
 Hmm. I see the difficulty here, ...

 
 On Wed, Aug 14, 2013 at 8:57 PM, Mateusz Kowalczyk wrote:
 ...
 The main problem with this approach is that we get comments (and their
 SrcLoc) as a separate list.

Hi Niklas, Mateusz,

It seems that haskell-src-exts has done quite a lot of the hard work 
(impressive!). 

Function parseFileWithComments returns the AST annotated with SrcLoc for 
each major node, and with a separate list of comments annotated with 
SrcSpan. So it should be possible to reconstitute the source and 
intersperse the comments between the nodes(?) -- or detect where a comment 
spans code.

(I'm also looking at the discussion about comments on Niklas's 
announcement of 1.14.0 )

Could there be a style of prettyPrint that carries the list of comments 
(as a continuation) alongside walking the AST, and consumes/outputs each 
comment where the comment's Span falls between the nodes' Loc? Would this 
need too much lookahead?

(And thank you to Adam for introducing me to the joys of source-munging. 
http://www.haskell.org/pipermail/haskell-cafe/2013-August/108426.html .)

AntC




Re: [Haskell-cafe] One-element tuple

2013-08-20 Thread AntC
 adam vogt vogt.adam at gmail.com writes:
 
 This preprocessor I just threw together doesn't seem to suffer from
 those issues http://lpaste.net/91967. This kind of approach probably
 might let you steal T(..) while still allowing `T (..)' to refer to
 whatever is the original, though I think that would require working
 with the messier Annotated syntax tree.
 

wow! Adam, thank you. 

Even copes with multiple nested parens:

instance C ((a, b))  c  ...

== instance C ((a, b)) (OneT.OneTuple (OneT.OneTuple c)) ...


AntC





Re: [Haskell-cafe] One-element tuple

2013-08-19 Thread AntC
 Brent Yorgey byorgey at seas.upenn.edu writes:
  
  data Oneple a = Oneple a   -- (or newtype)
  (Oneple $ CustId 47)   -- too verbose
  
 
 This is what the OneTuple package is for:
 

Thank you Brent, and Ivan made the same suggestion.

Apart from being more verbose (2 extra chars) than the approach I disliked 
as being too verbose, does OneTuple have any merit?

 Dan Burton danburton.email at gmail.com 
 Fri Aug 16 03:04:14 UTC 2013 
claims that 
 T(CustId 47) is just one character off from what you actually want ...

(Bare T is very likely to be used already.)

But not only do I want to construct Oneples, I also want to pattern match 
and discriminate on their type in instances:

f (T(CustId x)) = ...

instance C (Oneple (CustId Int)) ...


I'm sensing there must be a need (since OneTuple is claimed to be a 
solution for it).

Would double-parens be too wild an idea?:

... ((CustId 47)) `extend` (CustName Fred, Gender Male)

f ((CustId x)) = ...

instance C ((CustId Int)) ...

We'd have to avoid the double parens as in:

((meth obj) (double x))







Re: [Haskell-cafe] ordNub

2013-08-19 Thread AntC
 Richard A. O'Keefe ok at cs.otago.ac.nz writes:
 
 There are at least four different things that an Ord version might
 mean:
 
  - first sort a list, then eliminate duplicates
  - sort a list eliminating duplicates stably as you go
(think 'merge sort', using 'union' instead of 'merge')
  - build a balanced tree set as you go
  - having a list that is already sorted, use that to
 eliminate duplicates cheaply.
 
 These things have different costs.  For example, ...
 
 What I want is more often ordNubBy than ordNub, though.
 

(ordNubBy you can get via a suitable Ord instance for the element type?)

The bigger problem is that you might not have a suitable Ord instance. 
After all, sets are defined by equality/equivalence relation, not 
necessarily by Ord.

There are many other things you might want to do with a set/collection 
than just remove duplicates.

I notice that Data.Set.map = fromList . (map stuff) . toList
That is, build two lists (to be GC'd), as well as the set result. 

So does the performane cost of from/to List outrun the benefit of 
Data.Set.union? Depends how much you're mapping vs inserting and checking 
membership.




Re: [Haskell-cafe] One-element tuple

2013-08-19 Thread AntC
 Daniel F difrumin at gmail.com writes:
 
 Can you please elaborate why this inconsistency is annoying and what's 
the use of OneTuple?
 Genuine question,

Hi Daniel, the main annoyance is the verbosity (of using a data type and 
constructor), and that it no longer looks like a tuple.

The inconsistency is because a one-element tuple is just as cromulent as an 
n-element, or a zero-element. (And that a one-element tuple is a distinct 
type from the element on its own/un-tupled.)

So if I have instances (as I do) like:

instance C (a, b) ...
instance C () ...

I can't usefully put either of these next two, because they're equiv to 
the third:

instance C (( a )) ...
instance C ( a )   ...
instance C a   ...   -- overlaps every instance

Similarly for patterns and expressions, the so-called superfluous parens 
are just stripped away, so equivalent to the bare term.

The use of OneTuple is that it comes with all Prelude instances pre-
declared (just like all other tuple constructors). I don't see that it has 
an advantage over declaring your own data type(?) I'd also be interested 
to know who is using it, and why.

What I'm doing is building Type-Indexed Tuples [1] mentioned in HList [2], 
as an approach to extensible records [3], on the model of Trex [4] -- all 
of which acknowledge one-element records/rows/tuples. And then I'm using 
the tuples as a platform for relational algebra [5] with natural Join (and 
ideas from Tropashko's 'Relational Lattice' [6]). 

Is there anybody using OneTuple 'in anger'?

AntC

[1] M. Shields and E.Meijer. Type-indexed rows. In Proceedings
of the 28th ACM SIGPLAN-SIGACT symposium on Principles
of Programming Languages, pages 261–275. ACMPress, 2001.
[2] http://hackage.haskell.org/package/HList
[3] http://www.haskell.org/haskellwiki/Extensible_record
[4] http://web.cecs.pdx.edu/~mpj/pubs/polyrec.html
[5] http://en.wikipedia.org/wiki/Relational_algebra#Natural_join_
[6] http://vadimtropashko.wordpress.com/relational-lattice/

 
 
 
 On Fri, Aug 16, 2013 at 5:35 AM, AntC anthony_clayden at 
clear.net.nz wrote:
 There's an annoying inconsistency:
     (CustId 47, CustName Fred, Gender Male)  -- threeple
     (CustId 47, CustName Fred)                -- twople
 --  (CustId 47)-- oneple not!
 () -- nople







Re: [Haskell-cafe] One-element tuple

2013-08-19 Thread AntC
 Mike Ledger eleventynine at gmail.com writes:
 
 It seems to me that this is Identity given a different name. A bonus of 
using Identity is that it won't introduce any new packages to the majority 
of installations.

 On 20/08/2013 1:17 PM, Ivan Lazar Miljenovic wrote:
 ...
  isn't a single element tuple isomorphic to just that element ...?


Hi Mike, and Ivan, I'm not sure you're 'getting' it.

A one-element tuple distinguishes at type level between a container vs 
contents. It's not identity/not isomorphic. Try those instances I gave.
(What's worse, `Identity` is no fewer chars than `OneTuple` ;-)

If your application is not working with tuples/for general usage, then 
no need to introduce any new packages. You won't want OneTuple's (or 
whatever they're called).

Since I am working with tuples, I want the code to be clear where it's 
dealing with tuples vs the Haskell type infrastructure.

Thanks Ivan for the dependencies list. No surprise that HList is using 
OneTuple == HCons a HNil. That need is exactly what I'm talking about, 
not a joke. Lennart's `tuple` package likewise (but no use to me because 
it's using positional access, not Type-Indexed).


AntC

 
  So if I have instances (as I do) like:
 
      instance C (a, b) ...
      instance C ()     ...
 
  I can't usefully put either of these next two, because they're equiv to
  the third:
 
      instance C (( a )) ...
      instance C ( a )   ...
      instance C a       ...   -- overlaps every instance
 
  Similarly for patterns and expressions, the so-called superfluous 
parens
  are just stripped away, so equivalent to the bare term.
 





[Haskell-cafe] One-element tuple

2013-08-15 Thread AntC
There's an annoying inconsistency:

(CustId 47, CustName Fred, Gender Male)   -- threeple
(CustId 47, CustName Fred)                -- twople
--  (CustId 47)                           -- oneple not!
()                                        -- nople

(That is, it's annoying if you're trying to make typeclass instances for 
extensible/contractable tuples. Yes, I know I could use HLists.)

I'm not happy with either approach I've tried:

data Oneple a = Oneple a   -- (or newtype)
(Oneple $ CustId 47)   -- too verbose

type Oneple a = [a]
[CustId 47]  -- at least looks bracket-y

What do you do?

AntC





[Haskell-cafe] Why are field selectors functions? [was: ghc-users A possible alternative to dot notation for record access]

2013-07-18 Thread AntC
No! This isn't more bikeshedding about notation.

It's a bit of Haskell archaeology.

 On Sun, Jun 30, 2013 at 2:59 AM, Judah Jacobson wrote:
[This isn't exactly what Judah wrote.]
 ...

 Instead of `x f` (to access field x of record f),
 maybe we could write `f{x}` as the record selection.  


The more I thought about that ...

We use { ... } to declare records, build them, update them.
We use { ... } in pattern matching to access named fields.

Why don't we use { ... } to access named fields in terms?

The syntax `e{ foo }` is unused in H98. (Or at least it was in 1998.)
Can someone who was there at the time (1994?, TRex?)
remember if that was ever considered?

In the declaration, the build/update, and pattern match:

data R = MkR { foo :: Int }
r = MkR { foo = 7 }
\ (MkR { foo }) -> ...  -- using NamedFieldPuns

`foo` is clearly bound to an Int.

So why is there this `foo` thing that isn't an Int?
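
For concreteness (this is just the status quo spelled out, not a proposal):

    data R = MkR { foo :: Int }

    r :: R
    r = MkR { foo = 7 }

    -- what the declaration generates: foo at the term level is a selector
    -- function, R -> Int, not an Int
    -- ghci> :type foo
    -- foo :: R -> Int

    n :: Int
    n = foo r    -- 7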

AntC




Re: [Haskell-cafe] Proposal: Non-recursive let

2013-07-11 Thread AntC
 oleg at okmij.org writes:
 ...
 In Haskell I'll have to uniquely number the s's:
 
 let (x,s1)  = foo 1 [] in
 let (y,s2)  = bar x s1 in
 let (z,s3)  = baz x y s2 in ...
 
 and re-number them if I insert a new statement. 
 
 I once wrote about 50-100 lines of code with the fragment like the
 above and the only problem was my messing up the numbering (at one
 place I used s2 where I should've used s3). ...

Oleg, I hope you are not saying that in production code you use names like 
x, y, z, s1, s2, s3, s4, ...

It leads to opaque code. If even you can mess up, what hope for us with 
only nano-Oleg brain capacity?

Next you'll be wanting GOTO and destructive assignment.

Who knows: one day somebody modifying your code might need to insert a 
line. (That 'somebody' might be your future self.)

Just don't do that! Use long_and_meaningful names.

50-100 near-identical lines of code sounds like an opportunity for an 
algorithm.

AntC




Re: [Haskell-cafe] GADTs and pattern matching

2013-06-19 Thread AntC
Francesco Mazzoli f at mazzo.li writes:

 
 I have stumbled upon a strange annoyance:
 
 {-# LANGUAGE GADTs #-}

Hi Francesco, I think you'll find that the 'annoyance' is nothing to do 
with GADTs. I suggest you take the type signature off of foo1, and see 
what type GHC infers for it. It isn't :: a -> Foo a -> Int.

 data Foo v where
 Foo :: Foo (Maybe v)
 
 -- This doesn't work
 --    foo1 :: a -> Foo a -> Int
 foo1 Nothing  Foo = undefined
 foo1 (Just x) Foo = undefined
  ...
 
 The first definition fails with the error
 
 Couldn't match expected type `a' with actual type `Maybe t0'
 ...
 In the pattern: Nothing

Yep, that message explains what's going on well enough for me.
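
For completeness, a version that does compile -- the Nothing/Just patterns
force the first argument to be a Maybe, so the signature has to say
Foo (Maybe a) rather than Foo a (a sketch only, it may not be what Francesco
intended):

    {-# LANGUAGE GADTs #-}

    data Foo v where
      Foo :: Foo (Maybe v)

    foo1 :: Maybe a -> Foo (Maybe a) -> Int
    foo1 Nothing  Foo = 0
    foo1 (Just _) Foo = 1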




Re: [Haskell-cafe] HList records access time [was: Diving into the records swamp]

2013-04-28 Thread AntC
 Aleksandar Dimitrov aleks.dimitrov at gmail.com writes:


Hi Aleksandar, I was hoping that Oleg himself would answer the second part 
of your post, as he did the part re DataKinds:
 
 
 Here's one thing I don't like about the current way HList-based 
 extensible record are represented (and used in OOHaskell [2]):
 the access time is linear in
 the number of records a certain type has.
 Somehow just the thought of reorder the records in your
 constructors to make your program go faster makes me cringe
 a little.

Yes, it would me!

I thought that usually the instance matching for HList happens at compile 
time(?) So reordering might make compiles faster, but that should be dealt 
with before run-time(?)

I guess it matters whether the 'shape' of the HLists is knowable at 
compile time, or the records are built 'on the fly' at run time.

Here's a possibly relevant idea that I would try for myself if only my day 
job didn't get in the way of my life so much ;-(. 

Instead of Type-Indexed Products (TIP's as the HList paper calls them), 
how about Type-Indexed tuples (TIple's) like this:

instance Has (a, b) a where { get (x, _) = x; ... }
instance Has (a, b) b ...
instance Has (a, b, c) a ...
...
instance Has (a, b, c, d) a ...
...

(Note that the result type is not an argument to `get'; instead the type 
is 'pulled' out by the demanding context.)

Then access to any element is 'flat' rather than having to walk down the 
spine of an HList. (Again this needs being able to resolve instances at 
compile time.)
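
Spelt out a little (a sketch; the overlap caveat in the next paragraph applies
when the same type occurs twice in the tuple):

    {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

    -- the element type is 'pulled out' by the demanding context
    class Has r a where
      get :: r -> a

    instance Has (a, b) a where get (x, _) = x
    instance Has (a, b) b where get (_, y) = y

    instance Has (a, b, c) a where get (x, _, _) = x
    instance Has (a, b, c) b where get (_, y, _) = y
    instance Has (a, b, c) c where get (_, _, z) = z

    -- e.g.  get ('x', True) :: Bool   ==> True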

The instances for any n-tuple overlap, and the result of get/set is not 
confluent. This only matters if the same type appears twice in a tuple. 
(Contrast that HList uses a 'Lacks' pseudo-constraint/instance failure.)

I hope Template Haskell would help with generating the instances for all 
of the n-tuples -- otherwise it's a lot of boilerplate.

The tricky part comes with TIple-level combinations such as extend or 
append. That might be where NewAxioms overlaps come in to calculate the 
type of the result.

AntC






Re: [Haskell-cafe] Diving into the records swamp (possible GSoC project)

2013-04-27 Thread AntC
 Johan Tibell johan.tibell at gmail.com writes:
 
 Instead of endorsing one of the listed proposals directly, I will 
emphasize the problem, so we don't lose sight of it. The problem people 
run into *in practice* and complain about in blog posts, on Google+, or 
privately when we chat about Haskell over beer, is that they would like to 
write a record definition like this one:
 
     data Employee = Employee { id :: Int, name :: String }
 
     printId :: Employee -> IO ()
     printId emp = print $ id emp
 
 but since that doesn't work well in Haskell today due to name
 collisions, ...

[I've a bit more to say on that record definition below.]


Thank you Johan, I agree we should keep clear sight of the problem. So 
let's be a bit more precise: it's not exactly the record declaration that 
causes the name collisions, it's the field selector function that gets 
created automatically. (Note that we can use -XDisambiguateRecordFields to 
access fields to, errm, disambiguate.)

So I did put in a separate proposal [3] (and ticket) on that very narrow 
issue. (Simon M pointed out that I probably didn't name it very well!)

Even if we do nothing to advance the records swamp, PLEASE can we 
provide a compiler option to suppress that function.

I envisage it might facilitate a 'cottage industry' of Template Haskell 
solutions (generating Has instances), which would be a cheap and cheerful 
way to experiment in the design space.


[3] 
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/NoMonoRecordFields
(There are bound to be some fishhooks, especially around export/import of 
names from a module with no selector functions to one that's expecting 
them.)



[cont from above]
 ... the best practice today is to instead write something like:
 
     data Employee = Employee { employeeId :: Int, employeeName :: 
String }
 
     printId :: Employee -> IO ()
     printId emp = print $ employeeId emp
 
 The downsides of the latter have been discussed elsewhere, but briefly 
they are:
 
  * Overly verbose when there's no ambiguity.
  * Ad-hoc prefix is hard to predict (i.e. sometimes abbreviations of the 
data type name are used).

I don't entirely agree with your analysis.
 * fields named `id' or `name' are very likely to clash,
   so that's a bad design (_too_ generic).
 * If you've normalised your data model [**],
   you are very likely to want exactly the same field
   in several records
   (for example employeeId in EmployeeNameAddress,
and in EmployeePay and in EmployeeTimeSheet.)

[And this use case is what TP/DORF is primarily aimed at.]
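
For illustration, the normalised model wants to write something like this
(field names invented; plain Haskell rejects the repeated selector in one
module, which is exactly the gap DORF-style proposals target -- today's GHC
would need the later DuplicateRecordFields extension to accept it):

    data EmployeeNameAddress = EmployeeNameAddress
      { employeeId :: Int, name :: String, address :: String }
    data EmployeePay = EmployeePay
      { employeeId :: Int, salary :: Int }
    data EmployeeTimeSheet = EmployeeTimeSheet
      { employeeId :: Int, week :: Int, hours :: Int }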

[**] Do I need to explain what data model normalisation is? I fear that so-
called XML 'databases' mean academics don't get taught normalisation any 
more(?)

AntC





Re: [Haskell-cafe] Diving into the records swamp (possible GSoC project)

2013-04-27 Thread AntC
 Johan Tibell johan.tibell at gmail.com writes:
 
 The discussions about an overhauled record system also involve lots of 
talk about record sub-typing, extensible records, and other more advanced 
features. I'd like to point out that there doesn't seem to be a great 
demand for these features. ...

Sorry, Johan, I really have to disagree with that.

There are lots of Haskell-to-SQL interfaces that build on HList and its 
extensible record ideas (HDBC for example).

But the usability is not good (as Petr points out, and as Oleg/Ralf 
admitted back in the paper). The type error messages are long and obscure. 



 ... They might be nice-to-haves or might fall out naturally from a 
solution to the namespacing problem above, but they are in fact not needed 
to solve the common problem people have with the Haskell record system.

The "common problem people have" is that the record system is unusable 
[IMO], so it doesn't get 'stretched' to see what other difficulties it has. 
There are all sorts of alternative systems (including Lenses) built with 
Template Haskell (and chewing gum and gaffer tape: that's how desperately 
bad the current situation is ;-).

I'm saying that many people find the Haskell record system 'as is' so 
dysfunctional that they give up on it! I feel strongly that as soon as we 
get past the name collisions, there'll be other blockages to using it.

I'd be interested to hear if there are any who can remember the Trex 
system, and how (un)usable it was?

AntC





Re: [Haskell-cafe] Diving into the records swamp (possible GSoC project)

2013-04-26 Thread AntC
 Adam Gundry adam.gundry at strath.ac.uk writes:
 
 Hi,
 
 I am hoping to do a GSoC project this year working on GHC, and have been
 pointed in the direction of the records issue (in particular, the desire
 to overload field names).

Heck you're brave!

Are you sure you want to step into the aggravated issue of changing the 
dot operator from being function composition?
Are you going to use explicit type application? (The type of get is very 
odd.)
Are you going to handle type-changing update?

 
 The plan would be to implement a solution to the narrow issue of
 overloaded field names, along the lines of Simon PJ's SORF proposal

So has someone decided that SORF is the best of those many proposals? I 
guess it's because it comes with the SPJ ring of confidence?

Before jumping to that decision, I suggest you/your sponsors consider the 
implications of the NewAxioms stuff in GHC Head [2] to 
support 'controlled' overlap.

I think overlap is the only extra feature needed to support the DORF or 
TPDORF proposals. (Plus the syntactic sugar already outlined in that 
proposal.)


  This would provide a basis for experimentation with
 first-class record types.

No: first-class record types need much more than the SORF proposal. In 
particular they need a way to extend an existing record to make a new one; 
project a subset of fields; and most importantly to merge two records with 
some fields in common avoiding doubling-up those fields (aka Relational 
Natural Join).

The DORF/TPDORF proposals are much better aimed as a step towards first-
class record types. [IMO **]

Oleg/Ralf's HList paper covers all the ground for first-class records. It 
depends heavily on overlaps, which is why the NewAxioms stuff would work 
in really well.

AntC


[2] http://hackage.haskell.org/trac/ghc/wiki/NewAxioms

[**] Declaration of interest: I wrote the DORF and TPDORF proposals.






Re: [Haskell-cafe] Sparse records/ADTs

2012-10-24 Thread AntC
Jon Fairbairn jon.fairbairn at cl.cam.ac.uk writes:

 
 
 Is there a convenient way of handling a data structure with lots
 of fields of different types that may or may not be filled in?
 

Hi Jon, if your question had appeared in a database forum, the answer would 
be ...

Sounds like you need to do normal-form analysis:
- what are the functional dependencies between the fields?
- is your data structure something like a 'universal relation'?
- is the structure better expressed as several records?
  (vertically partitioned)

A relevant treatment could be "How to Handle Missing Information without Using 
NULL": http://www.dcs.warwick.ac.uk/~hugh/TTM/Missing-info-without-nulls.pdf

I appreciate that SQL's NULL is not the same as Haskell's Maybe/Nothing (and 
SQL's semantics for NULL are utterly horrible). Nevertheless, if you're 
concerned to avoid the wasted space of sparse records, it could be a helpful 
discipline to design data structures without needing Maybe's.
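
A sketch of that vertical partitioning (type and field names purely
illustrative):

    -- the sparse shape: one record, mostly-absent fields
    data PersonSparse = PersonSparse
      { personId :: Int
      , street   :: Maybe String
      , phone    :: Maybe String
      }

    -- vertically partitioned: each optional group is its own record,
    -- keyed by the person's id, and present only when there's data
    data Person      = Person      { personKey :: Int, personName  :: String }
    data PersonPost  = PersonPost  { postKey   :: Int, postStreet  :: String }
    data PersonPhone = PersonPhone { phoneKey  :: Int, phoneNumber :: String }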

AntC




Re: [Haskell-cafe] A yet another question about subtyping and heterogeneous collections

2012-10-22 Thread AntC
Dmitry Vyal akamaus at gmail.com writes:

 
 On 10/19/2012 06:14 AM, AntC wrote:
  Roman Cheplyaka roma at ro-che.info writes:
 
[snip]
  instance (Upcast a b, Upcast b c) => Upcast a c where
     upcast = (upcast :: b -> a) . (upcast :: c -> b)
  This is the offending instance. Remember, GHC only looks at the instance
  head (Upcast a c here) when it decides which instance to use.
 
  Roman
 
  Hi Dmitry, looks like you've got the classic (show . read) difficulty. In
  your Upcast a c instance, the compiler is trying to figure out the type 
of b.
 
  You might think there's only one 'chain' to get from (say) type A to type 
D --
  that is via Upcast A B to Upcast B C to Upcast C D; but there's also an
  instance Upcast x x -- which means there could be any number of Upcast A A,
  Upcast B B, etc links in the chain.
 
  (And this doesn't count all the other possible instances that might be 
defined
  in other modules -- for all the compiler knows at that point.)
 
  The modern way to handle this is using type functions (aka type families 
aka
  associated types), but I'm not sure how that would apply here. (And, for 
the
  record, the old-fashioned way would use functional dependencies, as per the
  Heterogeneous Collections paper aka 'HList's).
 
  AntC
 
 
 Hello Antony,
 do I understand you correctly, that the error message is the result of 
 compiler using depth first search of some kind when calculating 
 instances?  Also can you please elaborate a bit more on using functional 
 dependencies for this problem? Upcast x y is not a function, it's a 
 relation, y can be upcasted to different x'es and different y's can be 
 upcasted to single x.
 
 Dmitry
 

Hi Dmitry, you've specified UndecidableInstances (which means you're 
saying "trust me, I know what I'm doing"). So the compiler isn't trying 
to 'calculate' instances so much as follow your logic, and the error message 
means that it can't follow. I'm guessing that the stack overflow is because 
it's trying to search, and getting into a loop of Upcast x x requiring 
Upcast x x requiring ... Increasing the stack size is not likely to help.

You could try removing the Upcast x x instance to see what happens and 
understand it better. (But I can see this won't help with solving the bigger 
problem.) 

The more usual approach for heterogeneous collections (for example in HList, 
or somewhat differently in lenses) is to define a class 'Has x r' (record r 
has field x), with methods get/set. Define instances for all your 'base' 
collection types and their fields. Then define an instance for the subtype to 
inherit from the supertype.
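
A small sketch of that style (names illustrative; one class per field rather
than a multi-parameter Has, just to keep it short):

    data Animal = Animal { animalName :: String }
    data Dog    = Dog    { dogAnimal :: Animal, dogBreed :: String }

    class HasName r where
      getName :: r -> String
      setName :: String -> r -> r

    instance HasName Animal where
      getName     = animalName
      setName n a = a { animalName = n }

    -- the subtype inherits by delegating to its supertype part
    instance HasName Dog where
      getName     = getName . dogAnimal
      setName n d = d { dogAnimal = setName n (dogAnimal d) }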

But that does require a strict hierarchy of sub-/super-types, so your wish to 
upcast in any direction won't fit.

For your general question on functional dependencies, you'll need to read the 
wikis. Relations and functions are isomorphic (and that's what fundeps takes 
advantage of); but it needs careful structuring of the instances to make type 
inference tractable.

HTH
AntC





Re: [Haskell-cafe] A yet another question about subtyping and heterogeneous collections

2012-10-18 Thread AntC
Roman Cheplyaka roma at ro-che.info writes:

 
 * Dmitry Vyal akamaus at gmail.com [2012-10-18 17:31:13+0400]
  On 10/18/2012 03:20 PM, MigMit wrote:
  Why do you need ALike x, BLike x etc.? Why not just Like u x?
  
  
  Hmm, looks like a nice idea. I tried it, unfortunately I can't cope
  with compiler error messages:
  
  tst.hs:32:15:
  Context reduction stack overflow; size = 201
  Use -fcontext-stack=N to increase stack size to N
Upcast a b
  In the first argument of `(.)', namely `(upcast :: b -> a)'
  In the expression: (upcast :: b -> a) . (upcast :: c -> b)
  In the expression: (upcast :: b -> a) . (upcast :: c -> b) $ x
 
  instance (Upcast a b, Upcast b c) => Upcast a c where
    upcast = (upcast :: b -> a) . (upcast :: c -> b)
 
 This is the offending instance. Remember, GHC only looks at the instance
 head (Upcast a c here) when it decides which instance to use.
 
 Roman
 
Hi Dmitry, looks like you've got the classic (show . read) difficulty. In 
your Upcast a c instance, the compiler is trying to figure out the type of b.

You might think there's only one 'chain' to get from (say) type A to type D -- 
that is via Upcast A B to Upcast B C to Upcast C D; but there's also an 
instance Upcast x x -- which means there could be any number of Upcast A A, 
Upcast B B, etc links in the chain.

(And this doesn't count all the other possible instances that might be defined 
in other modules -- for all the compiler knows at that point.)

The modern way to handle this is using type functions (aka type families aka 
associated types), but I'm not sure how that would apply here. (And, for the 
record, the old-fashioned way would use functional dependencies, as per the 
Heterogeneous Collections paper aka 'HList's).

AntC






Re: [Haskell-cafe] Destructive updates to plain ADTs

2012-09-09 Thread AntC
Milan Straka fox at ucw.cz writes:

 
 is there any way to perform a destructive update on a plain ADT?

Hi Milan, perhaps this is a dumb question, but given that you're using a lazy 
functional language: why do you want to do destructive updates at all?

Perhaps you'd be better off using an imperative/object-oriented language?

If you're trying to create a static data structure, perhaps the credit card 
transform is the approach to use?

 ... I would like just to run some benchmarks and see the results.

Running benchmarks for destructive updates in Haskell seems a waste of time(?)

AntC








Re: [Haskell-cafe] Fixity declaration extension

2012-08-13 Thread AntC
Ryan Ingram ryani.spam at gmail.com writes:

 
 
 When I was implementing a toy functional languages compiler I did away with 
precedence declarations by number and instead allowed the programmer to 
specify a partial order on declarations; this seems to be a much cleaner 
solution and avoids arbitrary precedences between otherwise unrelated 
operators defined in different modules.


I agree. I don't declare operators very often, and when I do I always struggle 
to remember which way round the precedence numbers go. I usually end up 
hunting for a Prelude operator that works the way I'm aiming for, then copy 
its definition. It would be much easier to declare the fixity of myop to be 
the same as someotherop (which would presumably have to be already declared/fixed 
in an imported module).

[It's also slightly counterintuitive that the thing being defined comes last 
in an infix declaration, and that the stand-alone operator isn't in parens.]

infixAs !! .$      -- fixing myop (.$) to have the same fixity as Prelude's (!!)

If you wanted to define precedence relative to some other operator(s), it 
might be clearer to give some model parsings (grabbing some syntax something 
like Ryan's):

infix .$ (x ** y .$ z .$ w) == (x ** ((y .$  z) .$ w))
-- === infixl 9 .$


OTOH, I think Евгений's proposal is getting too exotic. Do we really need such 
fine shades of binding? Will the reader remember how each operator binds 
relative to the others? Surely a case where explicit parens would be better.

(Anything else we can bikeshed about while we're at it?)

AntC





Re: [Haskell-cafe] CRIP: the Curiously Reoccuring Instance Pattern

2012-08-06 Thread AntC
 oleg at okmij.org writes:

 
  I think instead you should have:
  - abandoned FunDeps
  - embraced Overlapping more!
 
 Well, using TypeCast to emulate all FunDeps was demonstrated three
 years later after HList (or even sooner -- I don't remember when
 exactly the code was written):
 
 http://okmij.org/ftp/Haskell/TypeClass.html#Haskell1
 

Yes, I see the same idiom. I'll use it below in the definition of the TypeEq 
predicate.

 
  So here's my conjecture:
  1. We don't need FunDeps, except to define TypeCast.
 (And GHC now has equality constraints, which are prettier.)
  2. Without FunDeps, overlapping works fine.
  ...
 
 I agree on point 1 but not on point 2. The example of incoherence
 described in Sec `7.6.3.4. Overlapping instances' of the GHC User
 Guide has nothing to do with functional dependencies.
 

The question a couple of months ago was: do we need Type-level TypeRep? And 
you had made a case before that for the TTypeable approach. And TTypeable 
started life as 'A representation-based equality predicate', Section 9 of the 
HList paper. And the justification for it was 'Overlapping banned', Section 6.

So three questions in light of the approach of abandoning FunDeps and 
therefore not getting interference with overlapping:
A. Does TTypeable need to be so complicated?
B. Is TTypeable needed at all?
C. Does the 'simplistic' version of type equality testing suffer possible 
IncoherentInstances?

A. I conjecture that TTypeable does not need the TC_code family,
   nor the phantom type to drive it, nor the NAT encoding.
   Instead let each type stand for itself, then we just need TYPEOF:

type instance TYPEOF () = ( (), NIL )
type instance TYPEOF [a]= ( [()], (TYPEOF a) :/ NIL )
type instance TYPEOF (IO a) = ( IO (), (TYPEOF a) :/ NIL )
-- etc
-- I guess TH could generate those instances?

Then for testing type equality, we can use a simple overlapping test:

type family TypeEq a b :: Bool where
{ TypeEq a a = True
; TypeEq _ _ = False
}

   This uses the proposed NewAxiom style of type function.
   Equivalently with a class:

instance (TypeCast p TTrue)  => TypeEq a a p    -- works in Hugs 2002!
instance (TypeCast p TFalse) => TypeEq a b p

   No risk of incoherent instances because the arguments have to be
   instantiated via TYPEOF.

   The approach of putting unit as dummy argument to the type constructors
   means we can also test the 'shape' of the types.

2. But (apart from testing the shape) have we gained anything here
   over testing the types directly, rather than going via TYPEOF?
   If the type isn't grounded enough to test for equality
   (as observed in Section 9 'By chance or by design?'),
   then neither is it grounded enough to instantiate for TYPEOF.

3. The incoherent instances example in Sec 7.6.3.4 is because of instances
   defined in separate modules.
   The simplistic test for type equality declares its two instances together.
   Is there some way an incoherent instance could slip through?

   If not, what is the TTypeable mechanism gaining?

And by the way, I couldn't help trying a bit of 'compiler archaeology'. I dug 
out Hugs version November 2002. My revised hDeleteMany works fine, as does 
my 'repair' to the example in 'Ended up in murky water'.

So I can't see a need for TTypeable even back in 2004.

AntC




Re: [Haskell-cafe] CRIP: the Curiously Reoccuring Instance Pattern

2012-08-03 Thread AntC
 oleg at okmij.org writes:

 
  I think instead you should have:
  - abandoned FunDeps
  - embraced Overlapping more!
 
 Well, using TypeCast to emulate all FunDeps was demonstrated three
 years later after HList (or even sooner -- I don't remember when
 exactly the code was written):
 
 http://okmij.org/ftp/Haskell/TypeClass.html#Haskell1
 

Yikes! Thank you Oleg, more formidable code to get my head round.

 
  So here's my conjecture:
  1. We don't need FunDeps, except to define TypeCast.
 (And GHC now has equality constraints, which are prettier.)
  2. Without FunDeps, overlapping works fine.
  ...
 
 I agree on point 1 but not on point 2. The example of incoherence
 described in Sec `7.6.3.4. Overlapping instances' of the GHC User
 Guide has nothing to do with functional dependencies.
 

Well, I meant overlapping as used in HList. But fair point, and goes back to a 
much earlier discussion:
a) Don't switch on IncoherentInstances.
b) Make instance validation 'eager' and 'strict' like Hugs does,
   not 'optimistic'/'lax' like GHC.
   (That is, validate at point of declaration of the instance.)
c) Reject instances that are not explicitly dis-overlapped.

The mechanism for dis-overlap is disequality constraints. So for the multi-
module MyShow example in 7.6.3.4, reject the overlap:
 instance MyShow [T]   -- in module Main is OK, providing
 --  instance MyShow [a]   -- in module Help must be dis-overlapped to
 instance MyShow [a] | a /~ T where ...

The work-in-progress NewAxioms is aiming for something similar, but only for 
type functions. Perhaps in the longer term we use that to build helper 
functions, then banish overlapping type classes?

(I still think that explicitly dis-overlapped instances would be easier to 
understand.)

AntC





Re: [Haskell-cafe] CRIP: the Curiously Reoccuring Instance Pattern

2012-08-02 Thread AntC
 oleg at okmij.org writes:

 
 
  I implemented hDeleteMany without FunDeps -- and it works in Hugs (using 
  TypeCast -- but looks prettier in GHC with equality constraints).
 
 It is nice that hDeleteMany works on Hugs.  I forgot if we tried it
 back in 2004.

Thanks Oleg -- but it was only 8 years ago ;-)

Yes you did try it: the HList paper gives it as evidence for "We gave up on 
persuading Hugs" and "Overlapping banned" [Section 6, p7]. I think you were 
premature, which is why I'm using it as an example.

I've also 'repaired' the example in "Ended up in murky water", and got that 
working in Hugs.

Let me say straight away that I'm not trying to 'knock' the work you and Ralf 
put into HList. It's a formidable, and formidably impressive piece of work (as 
can be seen from the number of papers that reference it). I've built record-
like systems using it.

But the HList paper goes on to abandon overlapping, and bring in (essentially) 
the TTypeable approach for type equality. I'm saying (in the discussion a 
couple of months ago) that TTypeable is unnecessary, and that overlapping 
instances are robust -- except when they get mixed up with FunDeps.

I think instead you should have:
- abandoned FunDeps
- embraced Overlapping more!


 To be sure some HList code did work on Hugs. We even had
 a special subset of HList specifically for Hugs (which I removed from
 the distribution some time ago). The problem was that Hugs
 inexplicably would fail in some other HList code. Perhaps 2006 version
 is better in that respect, and more or all of HList would work on it.
 

Yeah, I'm not trying to 'rehabilitate' Hugs. No point in doing 'compiler 
archeology' on whether/when anything got fixed.

To come back to the CRIP Ryan has spotted:
- It got 'blessed' by SPJ in amongst the 'Records in Haskell' proposals [1]
- Certainly equality constraints make the technique easier;
- but TypeCast (in the HList paper) achieved the same thing back in 2004
  (and the HList paper discusses earlier approaches for typecast).
- So HList used the technique to get round some of the issues with overlapping.
- I'm saying you could have got round more of the issues.
- And perhaps have got the whole of HList working in Hugs.

So here's my conjecture:
1. We don't need FunDeps, except to define TypeCast.
   (And GHC now has equality constraints, which are prettier.)
2. Without FunDeps, overlapping works fine.
3. (Also we don't get into trouble with transitive chains of FunDeps.)
4. Instead of FunDeps, put TypeCasts on all of the instances.
   (Or other Class constraints to 'chain' typevars.)
5. And all of the instances must be framed with a bare typevar in the 'result'.
   (That is, a typevar distinct from any others in the head.)
6. (As you said to Ryan) we still sometimes need repeated typevars in the head,
   for instances that are (in effect) testing for equality.
   But these should only be in the 'argument' position.


AntC

[1] 
http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields#Higherranktypesandtypefunctions
-- see "functional-dependency-like mechanism (but using equalities)"





Re: [Haskell-cafe] CRIP: the Curiously Reoccuring Instance Pattern

2012-07-31 Thread AntC
 oleg at okmij.org writes:

 
 [... snip]
 
 Of course instances above are overlapping. And when we add functional
 dependencies (since we really want type-functions rather type
 relations), they stop working at all. We had to employ work-arounds,
 which are described in detail in the HList paper (which is 8 years old
 already).
 
Yes, it's adding the FunDeps that puts the spanner in the works.

Oleg, did you see this, and the discussion around that time?
http://www.haskell.org/pipermail/haskell-prime/2012-May/003688.html

I implemented hDeleteMany without FunDeps -- and it works in Hugs (using 
TypeCast -- but looks prettier in GHC with equality constraints).

Essentially it's a FunDep-like mechanism without FunDeps (as SPJ calls it), to 
achieve what Ryan's talking about. But you are quite right that we still need 
overlapping instances for parts of the type-level logic.

AntC





Re: [Haskell-cafe] Using promoted lists

2012-06-07 Thread AntC
Yves Parès yves.pares at gmail.com writes:

 
 The doc page http://www.haskell.org/ghc/docs/7.4.1/html/users_guide/kind-
polymorphism-and-promotion.html#promotion shows that lists are now usable as 
types. So I'm trying to make a type-level function to test if a type list 
contains a type. Unless I'm wrong, that calls for the use of a type family.

{-# LANGUAGE DataKinds, TypeOperators, KindSignatures, TypeFamilies #-}
data HBool = HTrue | HFalse
  -- Mandatory as Bool type is not currently promoted to a kind

type family Member x (l :: [*]) :: HBool

type instance Member x (x ': xs) = HTrue
type instance Member x (y ': xs) = Member x xs
type instance Member x (y ': '[]) = HFalse

But the compiler complains about my instance conflicting.

Hi Yves, always when you're asking a question like this, give the error 
message in full -- usually it will explain what's wrong.

In this case I can guess: you have overlapping instances (the first overlaps 
the second, the third overlaps the second), which are not allowed for type 
functions (currently -- unless the equations are confluent).

There's some early work on introducing overlaps to type functions (in a 
controlled way). http://hackage.haskell.org/trac/ghc/wiki/NewAxioms

And as it happens, several threads are going on in the lists re options and 
implications.

 Is what I'm trying to do feasible?

Promoted lists are really just the same as HLists, but using the standard 
Haskell syntax.

A membership test is feasible with FunDeps (because they do allow overlaps), 
but I guess you know the HList stuff, judging from your HBool. It's feasible 
using type equality constraints to achieve the same as HList (so ~ is 
equivalent to HList's TypeCast), also with overlaps.
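
Here's a sketch of that class-based route (extension list approximate; the ~
constraint plays TypeCast's role of 'pulling' the result out of the instance):

    {-# LANGUAGE DataKinds, KindSignatures, TypeOperators, TypeFamilies,
                 MultiParamTypeClasses, FlexibleInstances,
                 OverlappingInstances #-}

    data HBool = HTrue | HFalse

    class MemberP (x :: *) (l :: [*]) (b :: HBool)

    instance (b ~ 'HTrue)     => MemberP x (x ': xs) b
    instance (MemberP x xs b) => MemberP x (y ': xs) b
    instance (b ~ 'HFalse)    => MemberP x '[]       b

    -- e.g. a constraint (MemberP Int '[Char, Int] b) resolves with b ~ 'HTrue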


Second question: how can type level tuples (also mentioned in the doc page) 
be exploited? Aren't they redundant with type-level lists?

Type-level tuples are fixed length, and provide a flat structure (any element 
is equally accessible), whereas lists are expandable, with a nested structure 
that means you have to scan down the structure to get to the element you want.

AntC




Re: [Haskell-cafe] Fundeps and overlapping instances

2012-06-04 Thread AntC
Simon Peyton-Jones simonpj at microsoft.com writes:

 
 |  My take is that we should abandon Fundeps, and concentrate on
 |  introducing overlaps into type functions in a controlled way (what
 |  I've called 'dis- overlapped overlaps'.)
 | 
 | Abandoning fundeps would be a sad day for type-level programming.
 | There are many things other than overlaps that you can do with fundeps
 | and constraint kinds that you cannot currently do with type families,
 | such as:
 | 
 | - Partial application or higher-order programming.
 | - Short-circuit evaluation, lazy evaluation or type-level case.
 
 Etienne, I think it would be a good service to make Haskell wiki page 
describing the difference between
 fundeps and type families, and in particular describing things that can be 
done with the former but not the latter.
 
 
 Simon
 
+1 That would be great. I'll link to Etienne's wiki from the Discussion page 
I've started for NewAxioms http://hackage.haskell.org/trac/ghc/wiki/NewAxioms

In particular, it would be good to tease out where we're getting interference 
or inter-dependence between different type-level extensions:
Overlaps
Fundeps
UndecidableInstances (that is, breaking the coverage conditions)
ScopedTypeVariables

Given that the Fundeps idea is taken from relational theory, and 
relations are just another way of representing functions, there ought to be 
close correspondence to type-level functions.

To me, NewAxioms is aiming at type-level case, to provide a different style of 
defining type functions, as well as dis-overlapping overlaps.

AntC




Re: [Haskell-cafe] Fundeps and overlapping instances

2012-05-31 Thread AntC
Iavor Diatchki iavor.diatchki at gmail.com writes:

 
 Hello,
 
 the notion of a functional dependency is well established, and it was used 
well before it was introduced to Haskell (for example, take a look 
at http://en.wikipedia.org/wiki/Functional_dependency).  So I'd be weary to 
redefine it lightly.

Yes functional dependency is well established in relational algebra (set 
theory actually) -- it's about values in attributes. But there's nothing 
corresponding to typevars (I suppose you might call those patterns of values); 
there's nothing like overlaps.

Perhaps instances with Fundeps should only use H98-style arguments? Perhaps we 
should disallow overlaps with Fundeps (as Hugs does pretty-much)?

I can only understand tricky Fundeps by mentally translating them into type-
level functions (and I was doing that before type families/associated types 
came along).

class C a b | a -> b    ===    type family CF a
instance C a b          ===    type instance CF a = b

And that type instance is rejected because `b' is not in scope.
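
Spelt out on a concrete (invented) example:

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, TypeFamilies #-}

    class Collects c e | c -> e where
      empty  :: c
      insert :: e -> c -> c

    type family Elem c
    type instance Elem [a] = a    -- fine: the right-hand side mentions only 'a',
                                  -- which is bound on the left
    -- type instance Elem c = e   -- rejected: 'e' is not in scope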

Currently there are all sorts of tricks in instance declarations with overlaps 
and Fundeps, to achieve the effect of type-level functions. You do end up with 
instance arguments being all typevars, because the instance selection logic is 
really going on inside the constraints, with type-level 'helper functions'.

Some of Oleg's instances are awesome (and almost impenetrable -- the TTypeable 
code is a tour de force).

It's all so *dys-functional* (IMO).

My take is that we should abandon Fundeps, and concentrate on introducing 
overlaps into type functions in a controlled way (what I've called 'dis-
overlapped overlaps'.)

AntC





Re: [Haskell-cafe] Fundeps and overlapping instances

2012-05-28 Thread AntC
Gábor Lehel illissius at gmail.com writes:

 
 If you're referring to the NewAxioms work Simon linked to ... [snip]
 ... It seems vaguely similar to a paper on instance chains[2]
 I saw once.

Thanks Gábor for the reference, but I don't think they're very comparable.

The instance chains paper is in the context of fundeps and overlaps (as you'd expect 
from MPJ, since it was he who first introduced fundeps). I know fundeps 
are 'theoretically' equivalent to type families. But I think the instance 
chains paper is a good demonstration of why I find fundeps so awkward to 
follow at the superficial syntax level for type-level programming:
- you have to look first to the end of the instance to find the head;
- and assume (hope!) that the 'result' is the rightmost typevar;
- then backtrack through the list of constraints;
- some are only validation limits on the instance;
- some are calculating intermediate results (again with their result at right).

Instance chains include:
- not only resolving overlaps (yes, that's similar to NewAxioms);
- but also choosing instances based on type class membership;
  (often asked for by newbies, but really difficult to implement)
- and explicit failure leading to backtracking search

(Explicit failure is missing from NewAxioms. I don't mean backtracking, but at 
least treating certain conditions as no instance match. Oleg's HList work 
needs that (for example in the Lacks constraint), but has to fudge it by 
putting a fake instance with a fake constraint.)

Does the overall instance chain structure guarantee termination of type 
inference? I don't follow the algebra enough to be sure.

Morris & Jones' examples seem to me contrived: they've set up overlapping 
instances where you could avoid them, and even where they seem harder to 
follow. Yes, their solution is simpler than the problem they start with; but 
no, the problem didn't need to be that complex in the first case. (I'm looking 
especially at the GCD/Subt/Lte example -- I think I could get that to work in 
Hugs with fundeps and no overlaps.)

I'm surprised there isn't a Monad Transformer example: [3] by MPJ uses 
overlaps for MonadT. And MonadT was (I thought) what gave all the trouble with 
overlaps and default instances and silently changing behaviour. (There's a 
brief example in Morris' supporting survey - ref [11] in [2].)

Anybody out there can explain further?

AntC

 
 [2] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.170.9113&rep=rep1&type=pdf
 
  [3] Functional Programming with Overloading and Higher-Order Polymorphism
  Mark P. Jones, 1995




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fundeps and overlapping instances

2012-05-26 Thread AntC
Gábor Lehel illissius at gmail.com writes:

 
 On Fri, May 25, 2012 at 7:06 AM, AntC anthony_clayden at clear.net.nz 
wrote:
  But it looks like the work SPJ pointed to is using closed style. ...
 
 If you're referring to the NewAxioms work Simon linked to in the other
 thread, I don't see it explicitly stated that all instances have to be
 within a single module. Especially section 3.3 (Translation) of the
 pdf[1] seems to suggest otherwise. Though it also doesn't seem to be
 the same as what you're asking for. As far as I can tell, with
 NewAxioms, wherever you could currently have a type instance, you
 could instead have a type instance group. ... [snip]

Thanks Gábor, I think you could be right. (It needs some pretty close reading 
of the equations.) I think in this case an example would be worth a thousand 
typevars - double-barred of course.

I told them in Hebrew, I told them in Dutch,
I told them in Latin and Greek,
But I clear forgot (and it vexes me much),
That Haskell is what they speak.

The NewAxioms (draft) paper has a reference to Oleg's HList, but not his Type-
level Typeable, nor to Sulzmann & Stuckey (2002), Chameleon, nor the myriad 
discussions in the cafe and Haskell Prime.

It would be nice to see a statement along the lines of: we looked at X, Y and 
Z, and didn't follow that approach because ...; or we believe that approach 
can be incorporated like this ...

I thought it was a good research discipline to start with a literature survey, 
to avoid re-inventing the wheel(?)

 It seems vaguely similar to a paper on instance chains[2]
 I saw once.
 
Thanks, I saw that a while back but didn't look into it much at the time.

There's heaps of approaches out there to type-safe overlaps. Perhaps they're 
all logically equivalent(?) So perhaps we're only bikeshedding about surface 
syntax(??)

AntC






___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fundeps and overlapping instances

2012-05-24 Thread AntC
Simon Peyton-Jones simonpj at microsoft.com writes:

 [from 7 Jul 2010. I've woken up this thread at Oleg's instigation
 http://www.haskell.org/pipermail/haskell-prime/2011-July/003491.html ]
 

I'm not going to talk about Fundeps. This is about introducing overlapping 
instances into type families. But I mean dis-overlapped overlaps, with dis-
equality guards on the instances:
http://www.haskell.org/pipermail/haskell-prime/2012-May/003689.html
This is essentially the same proposal as the July 2011 discussion, but a 
little updated with actual experience of type families in their more mature 
form.

Example type family equations:
type instance F Int = Bool
type instance F a | a /~ Int = [a]  -- explicit dis-equality guard


The July 2010 thread shows how prescient SPJ was about introducing overlaps 
into type families (or FunDeps). The requirements he ends up with are spot-on, 
and I believe I have anticipated them in the proposal for dis-overlapped 
overlaps.

 
 Imagine a system “FDL” that has functional dependencies
 and local type constraints.  The big deal about this is that you get to 
exploit
 type equalities in *given* constraints.  Consider Oleg’s
 example, cut down a bit:
 
 class C a b | a -> b
 
 instance C Int Bool
 
 newtype N2 a = N2 (forall b. C a b => b)
 
 t2 :: N2 Int
 
 t2 = N2 True
 
 We end up type-checking (True :: forall b. C Int b =>
 b).   From the functional dependency we know that (b~Bool), so the
 function should typecheck.  GHC rejects this program; FDL would not.
 

GHC 7.2.1 still rejects this program, but accepts a version re-written to use 
type families:
type family CF a
type instance CF Int = Bool

newtype N2 a = N2 (CF a)    -- could be = N2 (forall b. b ~ CF a => b)

t2 :: N2 Int
t2 = N2 True

 
 But making use of these extra equalities in “given”
 constraints is quite tricky.  To see why look first at ... [snip]
 
SPJ works through 4 examples, gathering tricky and nasty situations that 
are unsound.

The examples involve overlaps, so can't be rewritten to use type families. 
(And GHC rejects attempts to encode them with type classes avoiding fundeps 
+ a functional-dependency-like mechanism (but using equalities) for the 
result type.)

So let me cut to SPJ's conclusions, and compare them against dis-overlapped 
overlaps ...
 
 So FDL must 
 
 ·  embody eager checking for inconsistent instances, across modules
 
 (Type families already implement this, SPJ notes below.)

 Yes: I expect dis-overlapped overlaps to do this.
 (Eager checking is Hugs' approach FWIW, and although at first it seems
  a nuisance, it leads to more 'crisp' development.
  Contrast GHC compiles inconsistent instances, but then you find
  you can't reach them, or get obscure failures.)
 Eager checking also detects and blocks the irksome imported-instances-
 silently-changing-behaviour behaviour.

 ·  never resolve overlap until only a unique instance can possibly
  match
 
 Yes-ish -- instance matching doesn't have to be as strict as that:
 type inference must gather evidence of the dis-equality(s)
 in the guards before matching to the type function equation.
 Because the instances can't overlap, it's safe to apply the instance,
 as soon as the dis-equality(s) are discharged, and the head matches.

 ·  put all overlapping instances in a single module
 
 I don't think we need this, providing each instance 'stands alone'
 with its dis-equality guards.
 Instead, for each imported module, we validate that its instances,
 do not overlap, taking the guards into account.
 [SPJ admits that such a restriction loses part of the point of
  overlap in the first place.]
  
 
 Type families already implement the first of these.   
 I believe that if we added the second and third, then overlap of type 
families would
 be fine.  (I may live to eat my words here.)  
 

AntC


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fundeps and overlapping instances

2012-05-24 Thread AntC
Twan van Laarhoven twanvl at gmail.com writes:

 
 On 24/05/12 14:14, AntC wrote:
  Simon Peyton-Jones simonpj at microsoft.com writes:
 
 
 Have you considered the alternative notation where multiple guards are 
allowed, 
 as in normal function definitions? Something like:
 
  type instance F a
  | a ~ Int   = Bool
  | Otherwise = [a]
 

Hi Twan, there are various styles amongst the discussions -- trace the links back 
from my previous post to Haskell-prime.

And see SPJ's surprise (to me) announcement that there's some work in 
progress, which gives something very like it.

But no, I don't like it: it means I can't put different instances in different 
modules (so far as I can tell).


 ..., which would allow you to write closed type functions.

Please explain (because I haven't seen this stated anywhere): what is the use 
case for closed type functions? As opposed to explicitly dis-overlapped type 
functions.

 
 I think this variant is almost equivalent to your proposal ...

No: closed functions mean you have to declare all your instances in the same 
place, in the same module. The whole point of the instance mechanism (or so I 
thought) is that it's expandable.

To see why, consider my example with a 2-argument type function.
http://www.haskell.org/pipermail/haskell-prime/2012-May/003690.html

(I haven't seen enough detail from the closed type func or IFEQ styles to know 
whether we could be 'open' on the first arg, but closed on the second.)

 I also don't know how hard something like this would be to implement. ...

Indeed! I've proposed implication constraints, see
http://www.haskell.org/pipermail/haskell-prime/2012-May/003689.html

That's from the Sulzmann and Stuckey 2002 paper, and I think available for 
type reasoning in such things as Chameleon. Implication Constraints are used 
for the OutsideIn(X) approach to implement GADT's with local constraints. (But 
what I've added is a dis-equality test in the antecedent.)

The evidence we need to satisfy the dis-equality guards does not have to be a 
fully-grounded type, it just needs to be enough that the types can't be equal 
(typically, the outermost constructor).

But it looks like the work SPJ pointed to is using closed style. If all 
they're trying to do is support HList and similar, I guess that's good enough.
 
I tried to explain all this the best part of a year ago. (Admittedly my 
explanation was a bit turgid, re-reading it today. And not that I was saying 
anything that hadn't been said by others -- it's resurfaced several times.) 
Funny how GHC-central just barrels ahead and ignores all those ideas, 
apparently without explaining why.

AntC




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] Higher types in contexts

2012-03-05 Thread AntC
I don't know what you want to do, but you may wrap the (forall a. [a] -> 
[a]) in an existential type: 

data ListFunc = forall a. ListFunc ([a] -> [a]) 

class Has r Rev ListFunc => HRClass r where 
  setHRClass :: ListFunc -> r -> r 

Thanks Henning,

What we're wanting to do is set the Higher-ranked function into a record
type, then get it and apply it polymorphically. SPJ's example is:

data HR = HR { rev :: forall a. [a] -> [a] }   -- where rev is the label
                                               -- for the H-R function

f :: HR -> ([Bool], [Char])
f r = (r.rev [True], r.rev "hello")            -- where r.rev is new syntax
                                               -- to get the func from HR

I've tried that ListFunc wrapping you suggest:

data HR = HR { rev :: ListFunc }

rHR1 = HR{ rev = ListFunc reverse }   -- put the `reverse` function into the
                                      -- record type; the setHRClass method
                                      -- would do this

But I can't 'dig out' the H-R function and apply it (not even
monomorphically):

case (rev rHR1) of { (ListFunc fn) -> fn "hello" }

==> Couldn't match type `a' with `Char'
      `a' is a rigid type variable bound by
      a pattern with constructor
        ListFunc :: forall a. ([a] -> [a]) -> ListFunc,
      in a case alternative
      at <interactive>:0:25
    Expected type: [a]
      Actual type: [Char]
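
For comparison (a different wrapping, not Henning's suggestion): the 
unwrap-and-apply step does typecheck if the quantifier sits *inside* the 
wrapper -- a rank-2 field rather than an existential one. A minimal sketch, 
with primed names invented here to avoid clashing with the definitions above:

{-# LANGUAGE RankNTypes #-}

newtype ListFunc' = ListFunc' (forall a. [a] -> [a])

data HR' = HR' { rev' :: ListFunc' }

rHR' :: HR'
rHR' = HR' { rev' = ListFunc' reverse }

digOut :: ([Bool], [Char])
digOut = case rev' rHR' of
           ListFunc' fn -> (fn [True], fn "hello")    -- fn stays polymorphic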

SPJ's approach (without a wrapper, but with some fancy instance constraints)
can 'dig out' the function and apply it polymorphically, but he can't get
the function into the record without using an explicit data constructor.


What am I doing wrong?

AntC


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] What's the term for this? Alpha-reordering? [was: Re: FreeSect -- generalised sections syntax extension]

2012-03-03 Thread AntC
Ras Far rasfar at gmail.com writes:

 I bit premature perhaps but I wanted to post it on a leap day...
 
 http://fremissant.net/freesect
 

Lambda calculus (and therefore Haskell) has a term 'alpha-renaming' for this:

f x y = x + y
g z w = z + w

`f` and `g` are equivalent modulo alpha-renaming, because the names of the 
dummy variables in the definition are entirely arbitrary -- they just stand for 
their position.

So in:

subtract y {- from -} x = x - y

I want to say that `subtract` is equivalent to (-) modulo ??? reordering.

Since partial application and the need to reorder arguments is so frequent in 
Haskell (and lambda-calculus), Haskell defines a combinator `flip`, so:

subtract = flip (-)   -- equivalent definition for subtract

So is there an arbitrary greek letter term for reordering?

[Question prompted by a discussion on ghc-users re 'Records in Haskell'. 
There's a proposal using a typeclass with type arguments in a different order 
to the other proposals.]


AntC



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Records in Haskell

2012-03-01 Thread AntC
Evan Laforge qdunkan at gmail.com writes:

 [ ccing the list because the wiki page was flawed and I made a bunch
 of changes, hope you don't mind ]
 

Thanks Evan, I've had a quick read through.

It's a bit difficult to compare to the other proposals.

I can't see discussion of extracting higher-ranked functions and applying them 
in polymorphic contexts. (This is SPJ's `rev` example.)

Putting h-r fields into records is the standard way of emulating object-
oriented style. SPJ's view is that requirement is very important in practice.

(No proposal has a good answer to updating h-r's, which you do discuss.)


Re the cons 1. Still can't have two records with the same field name in the 
same module since it relies on modules for namespacing.

Did you see the DORF precursor page ? 
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/NoMonoRecordFields

I tried to figure out if that would help, but I suspect not. (Looking at the 
desugar for `deriving (Lens)`, you need the H98 field selector functions.) 
Then for me, cons 1. is a show-stopper. (I know you think the opposite.)


I also don't see whether you can 'hide' or make abstract the representation of 
a record type, but still allow read-access to (some of) its fields. Suppose a 
malicious client declares a record with field #a. Can you stop them reading 
and/or updating your field #a whilst still letting them see field #b of your 
record type?


With SDNR, is it possible to define a polymorphic field selector function? I 
suspect not, looking at the desugar for `deriving (Lens)`, but perhaps I've mis-
understood. I mean:
get_a r = ?? #a r -- gets the #a field from any record r

This mechanism then supports the idea of 'virtual' fields -- SPJ's example of 
fullName, built from polymorphic firstName and lastName.


[By the way, did you mean to post to the cafe only? Most of the discussion is 
going on on ghc-users.]


AntC


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-20 Thread AntC
Gábor Lehel illissius at gmail.com writes:

 
 On Mon, Feb 20, 2012 at 4:41 AM, AntC anthony_clayden at clear.net.nz 
wrote:
  Folks, I've put my 'Record in Haskell' proposal on the wiki
  http://hackage.haskell.org/trac/ghc/wiki/Records  as suggestion 5 Declared
  Overloaded Record Fields.
 
  Feedback welcome.
 
 
 I was wondering whether it wouldn't make sense to have some syntax
 within the record itself, instead of at the top level, to declare,
 I'm definitely declaring a new record field, versus I'm definitely
 re-using an existing record field, versus If this record field
 already exists I'm re-using it, otherwise I'm declaring it. ...
 

We're trying to minimise the changes (and be backward compatible, if 
possible), so I think a single compiler option at module level is enough, 
without introducing tricky syntax in the record decls.

Option absent means H98 behaviour.
Option present means _all_ my record decls are using pre-defined record fields.

Note that this only affects the modules where the records (and fieldLabels) 
are declared.

In the application code which uses the records, just apply the field selector 
function to the record, or use familiar record update syntax. You don't have 
to know how the record or fields were declared. (That is, you can import H98 
style records and DORF style records quite happily.)

That suggests the best way to organise the database definitions/decls is:
- base module: data dictionary (fieldLabels only)
- record/structures module(s) grouped by application areas: records only
 plus interface to your data store; plus validation and manip utilities
- application modules: business code
 possibly plus ad-hoc records (may be decl'd H98 style)

Well stap me if that way of organising isn't best practice anyway!



 
 Regarding the polymorphic update / higher-rank fields issues, I'm not
 competent to address them in earnest, but: isn't this primarily an
 ImpredicativeTypes issue? If GHC had full support for
 ImpredicativeTypes (whatever that means), would it work?
 
 ~g
 

Thanks Gábor, neither am I really competent, which is why I asked SPJ to look 
at an early prototype. Since he says it's an unscalable hack, I'll stop there.

At least my proposal uses Has/get/set (with its type-level sugar) and supports 
type-changing update. So (I reckon) it's equal to or ahead of any other 
proposal -- except for those which need whole-scale syntax re-engineering and 
breaking a whole heap of code.

AntC



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-19 Thread AntC
 I'm proposing my record fields so that selectors are just functions. Then
it's 
 independent of dot notation. (It's the semantics I'm far more concerned
 with.) 

Folks, I've put my 'Record in Haskell' proposal on the wiki
http://hackage.haskell.org/trac/ghc/wiki/Records  as suggestion 5 Declared
Overloaded Record Fields.

Thanks to the vociferousness on this thread, dot notation is completely
optional.

Feedback welcome.

AntC


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution - record update

2012-02-09 Thread AntC
Donn Cave donn at avvanta.com writes:

 
 Quoth Evan Laforge qdunkan at gmail.com,
 ...
  The non-composing non-abstract updates are what bug me, and
  make me scatter about tons of 'modifyThis' functions, both for
  composability and to protect from field renames.
 
 So ... at the risk of stating the obvious, is it fair to say the root
 of this problem is at least the lack of `first class' update syntax?

No, Donn, it's not the lack of syntax, it's the lack of semantics for first-
class (polymorphic) record update. And there's very little that's obvious. SPJ 
was not very happy with any of this.

SPJ in the SORF proposal asks:
what does e { x = True } mean if there are lots of x fields in scope?
(which is precisely what we want to allow)

So he's supposing some syntax -- where `e' is some expression that evaluates 
to a record. (There's a shorter discussion in the TDNR proposal.)

If Haskell supported polymorphic update semantics (as well as polymorphic 
field selection), you could build for yourself all those update idioms you 
talk about.

More abstractly, can Haskell offer a polymorphic `set' (and `get') method for 
the `Has' class?

set :: (Has r fld t) => fld -> t -> _r -> r
get :: (Has r fld t) => r -> fld -> t    -- fld in record r at type t
-- where fld is a type/Kind that identifies the field
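
A minimal sketch of how such a Has class might be written down (Tempo, Config 
and tempoVal are names invented here just for illustration; type-changing 
update is left out, so `set' keeps the record type):

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

data Tempo = Tempo                        -- type-level 'peg' naming the field

class Has r fld t | r fld -> t where
  get :: r -> fld -> t
  set :: fld -> t -> r -> r               -- non-type-changing update only

data Config = Config { tempoVal :: Int }

instance Has Config Tempo Int where
  get r _   = tempoVal r
  set _ v r = r { tempoVal = v }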

The SORF proposal discusses lots of awkward cases which make polymorphic 
update difficult.

I've built a prototype that hacks round some of those cases. SPJ's view (on a 
quick inspect) is that it's workable in some cases, limited in others, and not 
scalable in general.

Are you/everybody here prepared to give away some of the current record 
features so that you can go poly?

- Do you want to change the type of a record?
  (that's why I've put `_r' in `set's type
   `_r' is the as-was type that we're throwing away.)
  Haskell currently supports changing the type of the record.
  (SPJ doubts whether type-changing has ever been a valuable feature.
   So do I.)

- Do you want to update Higher-rank fields?
  (typically used in records representing OO-style objects)
  Or is it enough to initialise the HR field when you create the record,
   then never change it?
  How many forall'd variables might you like in the HR field?

- Do you want to put constraints on the HR's forall'd types?

This is where the issue is stuck. Very possibly if we agree workable 
constraints, we're going to just run into further difficulties (like type 
inference becoming unmanageable without lots of type annotations to help 
resolve instances).

AntC



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution -record update

2012-02-09 Thread AntC
Donn Cave donn at avvanta.com writes:

 
 
 --  modifyRecord :: RecordType r => (a -> a) -> (r -> a) -> r -> r
    modifyRecord :: RecordType r => (r -> a) -> (a -> a) -> r -> r
 
 ... while this does obviously represent a polymorphic function,
Exactly!
 if I write
 
 --  data Config { tempo :: Int, ...}
data Config = Config { tempo :: Int, ...}
   f = modifyRecord tempo (+20)
   ...

But f defined like that is exactly what you can't write now (even with the 
args round the same way as the signature ;-), because:
* `tempo' is a function to select a field out of a record, *and only that*.
  So there's no way in the body of modifyRecord to use its (r -> a)
  argument to put the updated `a' back into `r'.
* You can't (in current Haskell) put in place of `tempo' any type/species
  of a term that could achieve that update, except by either:
  making modifyRecord in effect monomorphic to Config/tempo,
  or building a polymorphic update system wot we 'ave no' go' (yet).

 ... then f has type Config - Config, it isn't polymorphic.
You can do:
f Config{ tempo, .. } = Config {tempo = tempo + 20, ..}
And that does yield f :: Config - Config

(But I'm sure you knew that.)

OK, we could implement lenses, make `tempo' a lens instead of a selector, 
desugar the update syntax to call the update 'method' out of the lens, ...
And of course somehow arrange the sugar that when `tempo' appears in other 
contexts we take the select 'method'.
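
A minimal sketch of that direction (Lens, tempoL and modifyRecord are invented 
names echoing Donn's example, not a concrete proposal): bundle the select and 
update 'methods' together, and modifyRecord threads through both:

data Lens r a = Lens { getL :: r -> a, setL :: a -> r -> r }

modifyRecord :: Lens r a -> (a -> a) -> r -> r
modifyRecord l f r = setL l (f (getL l r)) r

data Config = Config { tempo :: Int }

tempoL :: Lens Config Int
tempoL = Lens tempo (\v r -> r { tempo = v })

f :: Config -> Config
f = modifyRecord tempoL (+ 20)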

You write up the proposal, and think through all the changes it would involve 
over Haskell/GHC as is, and then we can compare it to all those other 
proposals.

I think you'll still find you run into exactly the same difficulties I 
mentioned around update for record changing, Higher-ranked, etc.


 I am however vaguely aware that some parties to the Record
 Question would like to make record fields themselves polymorphic,
 
Yes, for example Jonathan Geddes' post:
 setName n r = r {name = n}
 addMr r = r { name = "Mr. " ++ (name r) }

(Jonathan's post is asking for alternative syntax: that's rather ambitious 
when we can't yet write anything like that currently, indeed we don't even 
know how we could implement it in general.)

His context is, presumably, having lots of different record types all with a 
field `name'. (Arguably he should adopt long_and_meaningful_names for his 
various fields.)

 Maybe that's semantically more like overloading,

Yes, I've implemented it as overloading.

 but in any case,
 it isn't strictly necessary in order to support first class updates,
 true?
 
   Donn
 
Well, I think we might be getting stuck here with what does 'first class 
update' mean?

The narrow issue we're trying to address is namespacing, and specifically name 
clashes: two different records with the same named field.

I can't do better than quote SPJ again, sorry (not very) to repeat myself:
 SPJ in the SORF proposal asks:
 what does e { x = True } mean if there are lots of x fields in scope?
 (which is precisely what we want to allow)

It's true that each x is monomorphic (in the sense of being tied to a 
specific record and field type), but at the time the compiler encounters that 
expression, it doesn't know the type of `e'. (In general, `e' is some 
arbitrary expression -- perhaps selecting a record out of a keyed array?)

So the compiler relies on the name x being monomorphic to tell it. In 
contrast, -XDisambiguateRecordFields copes with different xs by insisting 
you put the Record's data constructor in place of the expression `e'.

If we want to turn this into a syntax question, we perhaps need a way of 
putting both an expression and a data constructor in with the field and the 
value to update. But note that the x in { x = True } is sort of hard-coded, 
there's currently no way to put an expression in its place.

So you still can't define a modifyConfig: you couldn't put anything in place 
of its (r - a) parameter that could represent x.

Now in return for me answering that, please answer the questions in my earlier 
post about what limitations on update you'd like:
* record-type changing?
* Higher-ranked fields?
* How many forall'd variables?
* Constrained forall'd variables?

Thank you
AntC


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-06 Thread AntC
Donn Cave donn at avvanta.com writes:

 
 Quoth AntC anthony_clayden at clear.net.nz,
 ...
  We're on the slippery slope! Where will it end?
 
  And now that I've found it, I so love:
 
  customer.lastName.tail.head.toUpper    -- Yay!
 
 ... compared to present practice, with where dot is function
 composition only -
 
 (toUpper.head.tail.lastName) customer
 
 So two competing meanings of ., where one is literally the reverse
 of the other.  Of course we won't be able to spell composition
 without spaces any more, so technically the backwards and forward
 sense of . are distinct, but it seems kind of unfortunate anyway.

Thanks Donn. I can see we aren't going to agree on this, so I'll be brief. 
(I'll use my limited time to gather the proposal properly on to a wiki.)

It was a surprise to me that dot without spaces around is still legal syntax 
for function composition. So yes, we're going to break code (and hearts, by 
the sound of it).

I'm proposing my record fields so that selectors are just functions. Then it's 
independent of dot notation. (It's the semantics I'm far more concerned with.)

You (Donn) can then avoid 'switching on' dot as tight-binding reverse func 
apply, and nothing's got broken. (On the other hand, the change in semantics 
is so dramatic switching it on would get compile failures in typing 
expressions, so I don't see any danger of running broken code.)

We could use something other than dot for the purpose (# has been suggested), 
but the trouble is that the user-defined operator space has got used up. I see 
that as part of introducing tight-binding reverse func apply, I also need a 
loose-binding version (counterpart to ($) in the Prelude). (.$) seems most 
natural, but probably that's already extant in user-defined code.

So the advantage of dot (aside from it being familiar from other programming 
paradigms) is that we know the design space isn't used up.

 ...
 
 If you'll consider an idea from the peanut gallery ...  for me, the
 dot notation for fields may as well be spelling as an operator -
 that is, customer.lastName deploys a field named .lastName.

No, I no longer think it's just spelling. (I can see my Yay example is pushing 
the innovation too far too fast.) Examples which might be easier to swallow:

 customer.fullName
 shape.area
 date.dayOfWeek
 name.middleInitial
 list.length

Are all pseudo- or virtual or calculated 'fields'. (Or if not fields, then 
attributes or properties.)

I presume you're not suggesting we have both a function `area' and a pseudo-
field `.area'?

Perhaps we could allow some graphic char as a prefix to field names? (perhaps 
# because it's already allowed as part of magic-hash names?

But it would be part of the name, _not_ an operator.

 customer.#firstName   === (#firstName customer)


AntC



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-06 Thread AntC
Richard O'Keefe ok at cs.otago.ac.nz writes:

 
 
 On 4/02/2012, at 12:13 AM, Gábor Lehel wrote:
  
  All of this said, record.field is still the most readable, intuitive,
  and familiar syntax for selecting a field from a record that I know
  of.
 
 Having learned COBOL and Algol 68 before Haskell was dreamed of,
 I regard
 
   field OF record
 
 as the most readable, intuitive, and familiar syntax.  Given our
 background in reading natural language text, most of us probably
 thought once upon a time that '.' was the most readable, intuitive,
 and familiar syntax for terminating a statement, and in COBOL, NDL,
 and Smalltalk, it _is_.  There's certainly nothing about a dot
 that suggests field selection, *unless* you happen to be familiar
 with a programming language that does it that way. ...
 
Richard, now you're just being playful.

Database access languages used record.field since COBOL days (well certainly 
before SQL in 1969).

Assembler and linker languages often allowed dots within names.
I presume IPv4 dot-decimal comes from this.

I think the use of dot comes from section and sub-section numbering in large 
documents. I have no idea when that dates from, but off the top of my head:

Principia Mathematica, Russell and Whitehead 1910
Tractatus Logico-Philosophicus, Wittgenstein, 1918

(Admittedly Princ Math also uses dot (infix operator) as logical product. As 
well, there's a dot separator between a quantifier's list of bound variables 
(upside-down A, backwards E) and the bound term. Church's lambda notation 
similarly uses a dot to separate the bound variables.)

There is one 'odd man out' when it comes to dot notation:
A few little-known programming languages have for some reason bucked the well-
established convention of small circle for function composition.

There's certainly nothing about a dot that suggests function composition, 
*unless* ...

AntC





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-06 Thread AntC
Donn Cave donn at avvanta.com writes:

 
 You can find stuff like fromIntegral.ord in
 packages downloaded to build cabal-install for example.  It graphically
 appeals to the notion of a function composed of several functions, so
 the programmers in question will likely not even be repentant!

Data.Char.toUpper   -- a name composed of several names
shape.position.xCoord   -- a structure composed of several structures

Here's an off-the-wall idea for the syntactics:
- Where there's a block of names with dot separators (and no spaces).
- The dots must be either all postfix apply or all prefix compose.
- Postpone analysing until we've got some type info for the sub-names.
- The types must interlock either left-to-right or right-to-left.
  So now we know whether we're prefix or postfix.
- Then we can adjust the AST for loose-binding vs tight-binding.
  (As happens for operator precedence.)

?Do we call this Type-Directed Syntax Resolution ;-)

(By the way, natural languages do this sort of stuff all the time. In fact 
they revel in it:
   Eighth Army Push Bottles Up German Rear.
http://languagelog.ldc.upenn.edu/nll/?p=3708  )


The more I think about it, the more pseudo-fields make sense, the more I 
want field selectors to be just functions. There's an interesting example in 
Wadler's original paper that became View Patterns, "Views: A way for pattern 
matching to cohabit with data abstraction" [1987], section 4: viewing a complex 
number in cartesian and polar coordinates.

We may want our implementation of complex to be abstract. We provide (pseudo-) 
fields to select the coordinates. Then they're ever-so like methods for an 
(abstract) object.
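
A small sketch of that example (representation and names chosen here just for 
illustration): the stored representation is cartesian, but the polar 'fields' 
are plain functions, so they read the same way at use sites:

data Complex = Complex { re :: Double, im :: Double }

-- pseudo-fields: computed, yet used exactly like stored fields
rho, theta :: Complex -> Double
rho   c = sqrt (re c ^ 2 + im c ^ 2)
theta c = atan2 (im c) (re c)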

Also we want the (pseudo-) fields to be updatable, which means field update 
needs to be polymorphic (overloaded). Then all I need is a type-(or kind-) 
level 'peg' for the name, and an instance for Has/get/set.

AntC




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-03 Thread AntC
Kevin Quick quick at sparq.org writes:

 
  Currently under H98:
 f.g    -- (both lower case, no space around the dot)
  Is taken as function composition -- same as (f . g).
 f.  g  -- is taken as func composition (f . g)
 f  .g  -- is taken as func composition (f . g)
 
 And so it is.  Could have sworn these weren't accepted, but clearly I'm  
 wrong.  Thanks for pointing this out.
 

On a bit more digging, I'm scaring myself. These are both valid (H98):

 Data.Char.toUpper.Prelude.head.Prelude.tail $ "hello"   -- Strewth!
 "hello".$Prelude.tail.$Prelude.head.$Data.Char.toUpper
 -- using (.$) = flip ($) as fake dot notation
GHCi or Hugs ==> 'E'

The first example is good in that you can mix qualified names in with dot 
notation, and the lexer can bind the module name tighter than dot-as-function-
composition.

It's bad that not only are we proposing changing the meaning of dot, we're 
also changing the direction it binds. If you put in the parens:

 (Data.Char.toUpper.(Prelude.head.(Prelude.tail)))  "hello"
 (("hello".$Prelude.tail).$Prelude.head).$Data.Char.toUpper

Or perhaps not so bad, left-to-right thinking?

Another syntax change about dot-notation is that it binds tighter **than even 
function application**:

  map toUpper customer.lastName

Desugars to:

  map toUpper (lastName customer)

Compare if that dot were function composition:

  (map toUpper customer) . lastName    -- of course this isn't type-valid


But wait! there's more! we can make it worse! A field selector is just a 
function, so I can select a field and apply a function all in one string of 
dots:

  customer.lastName.tail.head.toUpper   -- Yay!!

 
 I was trying to ... *but* also  
 indicate that I specifically want the field selector rather than some  
 arbitrary f.  I wanted to extract the field f of every record in recs but  
 clearly indicate that f was a field selector and not a free function.
 
 And this is finally our difference.  I had wanted the no-space preceeding  
 dot syntax (.f) to specifically indicate I was selecting a field.  ...

You seem to be not alone in wanting some special syntax for applying field 
selectors (see other posts on this thread). H98 field selectors don't do this, 
they're just functions.

And there's me bending over backwards to make all Type-Directed overloaded-
Name Resolution field selectors just functions, so you can mix field selectors 
and functions **without** special syntax. Example Yay!! above.

I'm puzzled why you want different syntax for field selectors. Can you give 
some intuition?

Of course you can adopt a convention in your own code that dot-notation is for 
field selection only. (But you can't legislate for code you're importing.) 
(And Donn Cave wants to be able to ignore dot notation all together.)

AFAIC OO languages let you put all sorts of weird stuff together with dot 
notation. SPJ's got an example from Java in his TDNR.

I hope it's not because you name your fields and functions with brief, 
cryptic, one-letter codes!! You do have a coding convention in your production 
code to use long_and_meaningful_names, don't you?!

So you can tell `customer' is a customer (record), and `lastName' is a last 
Name (field), etc.


  The issue can  
 be resolved by explicit module namespace notation (ala. Prelude.map v.s.  
 Data.List.map).

I want module namespace notation **as well as** dot notation. This is my 
import from a distant planet example. And it'll work, going by example 
Strewth! above.

 
 In addition, under SORF, SPJ indicated that Dot notation must work in  
 cascades (left-associatively), and with an expression to the left:
r.x
r.x.y
(foo v).y
 
 I assume DORF would also support this as well and that r.x.y.z would  
 desugar to z (y (x r)).

Yes, as per discussion above.

 
 With regards to module namespace notation, neither SORF nor DORF mentions  
 anything that I found, but I'm assuming that the assertion is that it's  
 not needed because of the type-directed resolution.

It's rather the other way round. We want to avoid qualified names, and type-
directed resolution is the mechanism to achieve that ...

Where this 'Records in Haskell' thread started is that currently if you want 
to have the same field name in different records, you have to declare the 
records in different modules, then import them to the same place, and still 
you can only refer to them by putting the module prefix. (Unless you use the -
XDisambiguateRecordFields flag, but this only works within the scope of 
pattern matches and explicit record/data constructors; it doesn't work for the 
free-floating selector functions.)

And on balance, putting module prefixes everywhere is just too cumbersome.

So yes, the plan with SORF and DORF is that you can (mostly) use un-qualified 
names, and the resolution mechanism figures out which record type you're 
talking about.

One difference between DORF and SORF is that I 

Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-03 Thread AntC
Gábor Lehel illissius at gmail.com writes:

 
 On Fri, Feb 3, 2012 at 10:30 AM, AntC anthony_clayden at clear.net.nz 
wrote:
  You seem to be not alone in wanting some special syntax for applying field
  selectors (see other posts on this thread). H98 field selectors don't do 
this,
  they're just functions.
 
 
  I'm puzzled why you want different syntax for field selectors. Can you give
  some intuition?
 
 Here's my problems with allowing postfix application using dot for all
 functions.
 

Thank you Gábor for explaining this so clearly.

I can see that mixing prefix and postfix style would be confusing. I suppose 
in other programming paradigms (like database access) record.field is regarded 
as 'atomic', not as function application. And under my proposal (or SORF or 
TDNR) it's atomic-ish, because the dot binds tighter than **even function 
application**.

We already have in H98 field selection as function application. I'm keen not 
to break that, because then I can use dot notation on H98-style records. And 
I'm very keen that field selection (continue to) be equivalent to function 
application, precisely so that people who prefer prefix notation can carry on 
regardless.

Do people really write code with huge pile-ups of functions prefix upon 
prefix? Wouldn't that be confusing even when it's unidirectional? I've seen 
some examples in other threads mixing dot notation with function composition 
with user-defined operators built with a dot (like . ) and a sprinkling of 
parentheses. They were indeed unreadable, but frankly, I don't think that was 
purely down to the dot notation.


 The first problem is that mixing prefix and postfix function
 application within the same line makes it harder to read. 

I can see that. As you say, it's hopeless if readers have to start in the 
middle somewhere and work outwards, swerving to and fro.

If binding-dot is just (reverse) function application, I can't stop people 
exploiting it for more than field selection, and some functions just 'feel' 
like fields. SPJ gave the examples of:

customer.fullName    -- fullName is a function to concat first ++ last
shape.area           -- polymorphic area overloaded for each shape

And then there's:
datetime.month   -- calculate month from number-of-days format
tuple.fst
string.last
name.middleInitial
address.streetNumber
polar.theta.arctan 

We're on the slippery slope! Where will it end?

And now that I've found it, I so love:

customer.lastName.tail.head.toUpper    -- Yay!


I notice that for prefix functions you do sometimes need a bit of trickery to 
deal with partial application and inconvenient order of parameters. Of course 
there's parentheses to help, but there's also a family of combinators, 
especially:
($) -- loose-binding function application
(.) -- function composition

So I'm going to take your post as a challenge: can we build a family of 
combinators for postfix style? The objective is to 'keep up the momentum' left 
to right.

I've already been using one such:
(.$)  = flip ($)  -- looks combinator-ish to me!
(.$!) = flip ($!) -- strict version

customer.lastName .$ tail .$ head .$ toUpper    -- Yay.$!

 The other problem is that, in order to make partial application
 convenient, you want to put your function's parameters in the order of
 least specific to most specific. If you want to make postfix
 application convenient, you have to do the reverse.

True-ish. I guess it depends how 'tight' you feel the function binds with its 
least specific parameters. What's atomic?

 
 For example, take the filter function from the Prelude:
 
 filter :: (a -> Bool) -> [a] -> [a]
 
 But for postfix function application, this latter order is the one you want:
 
 [1..10].filter even
 is a lot more intuitive than
 even.filter [1..10]

Agreed. Easy. How do you like these?:

 [1..10] .$ filter even
 [1..10] .$ filter even .$ sum ^ 2
 [1..10] .$ filter even .$ foldr (+) 0 ^ 2

I'm looking at those thinking 'Oh yes! foldr (+) 0 is atomic-ish'.

 
 ... You'll end up with
 some people preferring postfix notation and writing their functions
 one way, other people preferring partial application and writing their
 functions the other way, and a lot of frustration when people from one
 group want to use functions written by the other.

Yeah, like little-endians vs. big-endians.

 I hope you'll agree
 that writing two versions of every function is not a satisfactory
 solution.

Absolutely! And we've a huge body of code defined in prefix form, we don't 
want to re-engineer that. And there's a whole body of 
mathematics/algebra/logic that uses prefix style.

 
 To finally get around to the point:
 
 All of this said, record.field is still the most readable, intuitive,
 and familiar syntax for selecting a field from a record that I know
 of. It would be nice to have it.

Indeed!

 If we restrict this postfix notation
 to only selecting fields from

Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-03 Thread AntC
Gábor Lehel illissius at gmail.com writes:

 
 On Fri, Feb 3, 2012 at 2:37 PM, AntC anthony_clayden at clear.net.nz 
wrote:
  Do people really write code with huge pile-ups of functions prefix upon
  prefix? Wouldn't that be confusing even when it's unidirectional?
 
 Not really. Pipeline-like chains where you apply each function to the
 result of the previous one are quite common and readable, whether in
 the shell, ..

Thank you for reminding me! Unix Pipelining -- that's where I've seen it. And 
in the shell, the pipelining is postfix.

My (.$) is loose-binding postfix application. But let me do:

(.|) = flip ($)    -- same as (.$), but suggestive of the pipe

customer.lastName  -- field select, dot 'allowed' per Gábor
 .| tail   -- function apply, dot not
 .| head
 .| toUpper     -- are you warming to it?

[1..10]
 .| filter even
 .| foldr (+) 0
 .| (^ 2)  -- the parens is a bit of a let-down
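
For what it's worth, a runnable sketch of that pipeline, assuming the operator 
is given a low, left-associative fixity so the stages chain in reading order:

infixl 1 .|
(.|) :: a -> (a -> b) -> b
x .| f = f x

example :: Int
example = [1..10]
       .| filter even
       .| foldr (+) 0
       .| (^ 2)          -- ==> 900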

 
 What -is- a problem is if you
 are forced or encouraged to write confusing code (because there's no
 other way to do it or because it's the path of least resistance),
 which is why I dislike proposals which make postfix application
 mandatory for some purposes, or which make it have different behaviour
 from normal prefix application.

Totally agree, that's one of the things I didn't like about TDNR or SORF. 
That's why I'm trying to support both prefix and dot-notation field selectors.

The main thing, though, I like about field selectors as functions (and nothing 
more) is that we've then got a mechanism for overloading them to select from 
multiple record types, and the mechanism is rock-solid instance resolution, not 
some semi-syntactic/semi-type-driven dodginess.

[I'll let you into a secret about my plan for world domination:
 If field selection is just an (overloaded) function,
 we can apply it to other things than records.
  tuple.fst
 We can turn our data dictionary into a type dictionary:
  newtype Customer_id = Customer_id Int
 We can 'hunt out' the customer_id from a tuple:
  tuple.customer_id
 (Using instance resolution to the only Customer_id in that tuple.)

 And now we've got tuples as anonymous records.
 Crucially: we don't care about the field's position within the tuple.
 We could have two tuples with the same fields, but different order.
 And treat them as equivalent at the type level.
 (What relational theory calls 'union compatible'.)

 End of mad moment.]
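
A toy sketch of that 'hunt out' step, with names invented here: instance 
resolution picks out the Customer_id wherever it sits in the tuple, so field 
position stops mattering:

{-# LANGUAGE FlexibleInstances #-}

newtype Customer_id = Customer_id Int

class HasCustomerId r where
  customer_id :: r -> Customer_id

instance HasCustomerId (Customer_id, String) where
  customer_id = fst

instance HasCustomerId (String, Customer_id) where
  customer_id = snd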

 If postfix code can be conveniently written using your (.$) combinator
 (and presumably its extended family), with no changes required to
 existing or future functions, I guess it could all work out. What I'm
 afraid of is that introducing postfix notation results in a pressure
 to make functions convenient to use with it, and then we eventually
 end up in the morass I described.

Totally agree, I think order of parameters in declarations should continue to 
expect prefix style, with least specific first (that is, leftmost).

 I'm not sure what we would need to be able to
 reasonably expect that.
 

I think it's time for others 'listening in' to develop the family of combinators!




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-01 Thread AntC
Kevin Quick quick at sparq.org writes:

 
 
 On Tue, 31 Jan 2012 23:10:34 -0700, Anthony Clayden  
 anthony_clayden at clear.net.nz wrote:
  I'm proposing x.f is _exactly_ f x. That is, the x.f gets
  desugared at an early phase in compilation.
 
 Anthony,
 
 I think part of the concern people are expressing here is that the above  
 would imply the ability to use point-free style.  But this orthogonality  
 is disavowed by your exception:
 
  A 'one-sided dot doesn't mean anything.
 

Kevin, thank you for helping me clarify my descriptions. I admit my 'proposal' 
is probably a bit hard to follow at the moment, because it lives in a series 
of emails, rather than all in a coherent wiki page.

It's also possibly confusing because there are three differing proposals in 
play, and they all use dot notation for field selection, but they use it 
somewhat differently.

But every proposal supports dot-as-function-composition, providing the dot 
appears with space on both sides.

The discussion with Donn Cave has clarified that under my proposal (but not 
TDNR or SORF), the dot notation is not necessary. Donn is concerned that older 
code might be using dot for function composition in contexts that would be 
ambiguous with field-selection-as-reverse-application.
 http://www.haskell.org/pipermail/haskell-cafe/2012-January/099008.html

So we could make the dot notation a compiler option:
- you either keep with H98 syntax,
  so field selection must be by usual function syntax f x
- or use dot notation so that x.f desugars to f x
  (of course you could still use f x: nothing forces you to use the dot)

Let me give some examples to clarify what I mean by 'one-sided' dot:
   M.f     -- no spaces, upper case to left, is qualified name
   x.f     -- no spaces, lower case to left, desugars to f x
   x . f  -- spaces both side of dot, is function composition
   x. f   -- space on one side only, what does that mean?
   x .f   -- space on one side only, what does that mean?

In my view, those last two (which I'm calling 'one-sided' dot) are too 
confusing (for the eye, at least). I would reject them as invalid syntax. H98 
might treat them as function composition. (I'm not sure, I wouldn't code like 
that.)

Donn is saying that he doesn't want to break extant code that uses 'one-sided' 
dot. Fair enough. Under my proposal we could make it a compiler option to 
stick with H98 syntax, an which case x.f is function composition (I believe), 
not field selection.

I know Wadler's rule about the disproportionate time spent on lexical syntax. 
SPJ was trying (inter alia) to introduce dot notation to support more OO-type 
thinking. I'm more familiar with dot-as-field-selector from relational 
databases, so I'm keen to introduce it.

But frankly it's a side-show compared to addressing the namespace issues 
around records.


 I haven't read the underlying proposals, ...

No, clearly you haven't from what follows. Pay me (and the other contributors) 
the respect of doing so before wasting my time. I'm a busy person. I 
appreciate the feedback on this forum when it's informed. I appreciate that 
people give their time voluntarily (which is what I'm doing).




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-01 Thread AntC
-
established (range of) meanings in other programming paradigms. If we 
introduce dot-notation into Haskell, we have to try to make it behave like 
those paradigms, but in a 'Haskelly' way.

[To go a little off-topic/out of scope. My gold standard is 
polymorphic/anonymous records with concatenation, merge, projection, 
extension, everything you get in relational algebra. I don't want to use up 
all the design options just getting through the current namespace 
restrictions -- infuriating though they are.]

AntC



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-01-31 Thread AntC
Donn Cave donn at avvanta.com writes:

 
 On 28/01/2012 13:00, Paul R wrote:
 ...
  All this dot syntax magic frankly frightens me. Haskell, as a pure
  functionnal language, requires (and allows !) a programming style that
  just does not mix well with object oriented practices. 
 
 In the glasgow-haskell-users discussion, it has been pointed out (to 
 little apparent effect) that the current notation for access by field
 name, `field record', is naturally functional and is easier to read
 for a functionally trained eye than a postfix `record.field' alternative.
 [snip]
   Donn
 
Donn, I can see the argument: Haskell has never been afraid to be different. 
Just because OO does it like that, so what?

But if you read SPJ's discussion in the TDNR proposal, there's a cultural 
connection to OO. My post at the head of this thread put it as focus on the 
object - look for the action.

Of course it's easy to 'fake' postfix function application:
(.$) = flip ($)

But the trouble is that .$ binds weakly. What we want is for the dot to bind 
tighter even than function apply. So:
 crack egg.largeEnd   == crack (largeEnd egg)
(Where == means 'is syntactic sugar for'.)

We're already familiar with the tight-binding dot for qualified names. I 
suppose we're coping with the visual confusion with space-surrounded dot as 
function composition.

But I can see that one more petit bonbon could tip confusion over the edge.

To my eye, one-sided dot application is a bonbon too far.

My proposal is that field selection functions be just ordinary functions, and 
dot notation be just function application(tight-binding). Then:
  object.fieldfuncmethod   == fieldfuncmethod object
(Subject to the tight binding for the dot.)
And one-sided dot application is pointless (! errk I mean 'without purpose', 
no different to writing the bare object or bare fieldfuncmethod).

Then you can write in your familiar style, and can use polymorphic field 
selectors as plain functions (same syntax as presently).

Those under the influence of OO can write dot notation, until they discover 
the joys of pointless style.

AntC


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-01-31 Thread AntC
Donn Cave donn at avvanta.com writes:

 
 Quoth AntC anthony_clayden at clear.net.nz,
 ...
  My proposal is that field selection functions be just ordinary functions, 
and 
  dot notation be just function application(tight-binding). Then:
object.fieldfuncmethod   == fieldfuncmethod object
  (Subject to the tight binding for the dot.)
  And one-sided dot application is pointless (! errk I mean 'without 
purpose', 
  no different to writing the bare object or bare fieldfuncmethod).
 
 That's interesting!  The wiki page on SORF (Simple Overloaded Record Fields,
 http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields)
 has some language that, as I interpreted it, meant that Has/Get syntactic
 sugar depended on the dot, so it's indispensable. 

Yes it does, and that's one of the things I didn't like - hence my counter-
proposal. In particular in SORF, the dot notation got tied into 'virtual 
record selectors'. Now 'virtual record selectors' is a good idea, but SORF 
tied it to the field selection approach, so had to go via a Has instance, 
which introduced a `set' method as well as the get, which didn't make sense, 
so SPJ ran into trouble.

Actually the TDNR proposal was better re the power of the dot: works with 
any function.

As soon as you decide to make 'virtual record selectors' just ordinary 
functions (so they select but not update), then you can see that field names 
are also just ordinary functions (for selection purposes). So the semantics 
for field 'selection' (whether or not you use dot notation) is just function 
application. So Type-Directed Name resolution is just instance resolution. So 
it all gets much easier.

  Your proposal actually
 has some similar language but, I see you don't mean it that way.  That's
 great news for anyone who's really dying to get somewhere with records,
 if it means that the functionality could in principle be introduced
 independently of ...

Yes. Actually, (IMHO) the biggest block to making some progress with 
the 'cottage industry' for records (and there are heaps of ideas out there) is 
that currently the field name appearing in data decls grabs so much of the 
namespace real estate. It creates a global name (selector function) that can't 
be overloaded.

You'll see in my other posts last night (NZ time) that the first thing I think 
should happen is to switch off auto-creation of field selection functions. 
(This should have come along as an option with DisambiguateRecordFields, I 
think. http://www.haskell.org/pipermail/glasgow-haskell-users/2012-January/021750.html)

 ... changes to the interpretation of . that would break
 a lot of code.
 

Yes, in principle we could introduce the semantics for field-selectors-as-
overloaded-functions without introducing any special syntax for field 
selection (dot notation or whatever). But the 'Records in Haskell' thread 
started with a Reddit/Yesod discussion about records, and the lack of dot 
notation being the last major wart in Haskell. A sentiment open to doubt in 
the words of the poet. It stung SPJ enough to open up the discussion (and I 
guess now is timely as 7.4.1 gets put to bed).

For me, the record/field namespacing is the major wart, polymorphism only 
slightly less, and the notation is a side-issue. But I don't want to lose the 
initiative that's built up, so I'm trying to address both at the same time.

AntC





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-01-26 Thread AntC
Steve Horne sh006d3592 at blueyonder.co.uk writes:

 
 There's a proposal at the moment to add support for TDNR to Haskell
 - to leverage the power of the dot (e.g. for intellisense).
 http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolution
 I approve of the goal, ...

Steve, I think that proposal has been rather superseded by
http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields,
which draws on TDNR. But SORF is best seen as an evolving design space, with
precise details yet to be clarified/agreed. I've put my own variation into
the ring:
http://www.haskell.org/pipermail/glasgow-haskell-users/2011-December/021298.html
-- which seems to have fallen into a black hole :-(

One of the aspects of TDNR that wasn't so popular was that its type-directed 
resolution was very similar to instance resolution, but subtly and confusingly 
different.

I guess we have to be very careful about the dot. It seems to be in a
very 'crowded' syntax space, so if we implement it the wrong way, we could
end up shutting the door with the keys left inside.

SPJ's observations about how the dot works in other languages are all good 
points. He's arguing that the dot should behave in a familiar way. I'm most 
used to it in SQL as table.column, but I guess for most programmers it's 
object.method. Haskell is already encumbered by Module.name, and g . f 
(function composition with spaces round the dot).
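
Just to spell out how crowded that space already is (ordinary Haskell,
nothing proposed here):

    import qualified Data.List as L   -- Module.name: the dot as a qualifier

    sortedWords :: String -> [String]
    sortedWords = L.sort . words      -- the dot (with spaces) as composition

    -- The records proposals would add a third reading, roughly:
    --     xs.length   ==   length xs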

I like the part in OverloadedRecordFields (and TDNR) re user-defined 'virtual' 
fields. (fullName being a concatenation of the datatype fields firstName and 
lastName, area being a calculation over a Shape datatype.) But the point about 
those being virtual is that they're not first-class fields: you can't update 
through them. SPJ got 'stuck' at that point.
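
A sketch of those two examples written as plain functions (datatype layouts
invented for illustration):

    data Person = Person { firstName :: String, lastName :: String }

    -- A 'virtual' field: selects like a field, but you can't update through it.
    fullName :: Person -> String
    fullName p = firstName p ++ " " ++ lastName p

    data Shape = Circle Double | Rect Double Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h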

My proposal was that restricting the dot to field selection wasted too much of 
the design space. Instead dot should be merely syntactic sugar for reverse 
function application. That is:
   whatever.funcmethod == (funcmethod whatever)
(Note no spaces around the dot. This is syntactically distinct from qualified 
names because the name to the left of the dot begins lower-case.)

Then funcmethod can be a 'real' field selector, or a virtual field or a class 
method or some other function completely.
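
Since the bare dot can't be redefined in today's Haskell without colliding
with composition, here's the intended desugaring spelt out with a named
operator instead (purely illustrative):

    infixl 9 |>                  -- tight binding, as proposed for the dot
    (|>) :: a -> (a -> b) -> b
    x |> f = f x

    -- So   "banana".reverse.length   would mean
    --      "banana" |> reverse |> length   ==   length (reverse "banana")
    demo :: Int
    demo = "banana" |> reverse |> length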

So to get to name resolution: since dot is (reverse) function application, we 
can use all the usual Haskell type inference/instance selection 'for free'. 
Either/both `whatever' and `funcmethod' could be arguments passed through from 
a distant call, which turned out to be a record type and field selector (not 
recognisable as such from its name). So we'd get polymorphic record and field 
selection 'for free'.
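
For instance, a caller several levels up needn't know anything about the
record it's handed or the field it's selecting. A sketch, reusing a get-only
Has class (all names made up):

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

    class Has r fld a | r fld -> a where
      get :: r -> fld -> a

    -- Polymorphic in both the record type and the label: ordinary type
    -- inference and instance resolution pick the right 'get' at the eventual
    -- call site, whether 'fld' names a stored field or a virtual one.
    describe :: Has r fld String => fld -> r -> String
    describe fld r = "value: " ++ get r fld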

I'd also like to be able to mix the dot with qualified names:
   A.b.(C.D.e.f).G.h == (G.h ((f C.D.e) A.b))
The syntax rule is: an upper-case name to the left of the dot means this is
a qualified name, which binds most tightly; a lower-case name to the left
means reverse function application. Of course you can use parentheses to
group differently.

(Re a one-sided dot I have no intuitions. TDNR includes some options for 
partial application/sections, SORF some more. They seem to me what Wirth would 
call 'rococo'. If dot is to be merely function application, it hardly seems 
worth worrying about.)

How do we get field names to be suitable funcmethods for dot applying to 
records? And how do we support field update? == Subjects for a different post.

There's also an elephant in the room I haven't talked about: TDNR started with 
what happens inside an IDE when you type `x.' and all the possible methods (or 
fields) for x pop up. This follows the philosophy in OO of focus on the 
object - look for the action. (Same thinking as right-click in GUIs. 
Contrast old-style 'green screen' applications where you went down a menu tree 
first (action), then looked for your object.)

If the dot is merely function application, then what follows the dot could 
be 'anything' (including very generic functions like show or return). I plain 
don't know if IDEs can be smart enough to spot that what's to the left of the 
dot is a datatype and offer its fields, or get from its type to its instances 
to their methods. (Actually, under my proposal, datatype to fields is exactly 
datatype to Has instance.) (How) could it tell what are more-specific or more-
generic methods?


 My basic idea is stolen from Bertrand Meyer (Object-Oriented
 Software Construction, second edition). Basically, a class *is* both
 a module and a type. ...

1) Are you sure that C++ classes/instances/methods are comparable enough to
Haskell's? This is a very confusing area of terminology for object-oriented
vs. functional languages.

2) Have you looked at GHC 7.4.1 innovations around classes-as-types and 
Constraint kinds?




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org