RE: Overloaded record fields

2013-06-28 Thread Simon Peyton-Jones
| Folks, I'm keenly aware that GSoC has a limited timespan; and that there
| has already been much heat generated on the records debate.

I am also keenly aware of this.  I think the plan Ant outlines below makes 
sense; I'll work on it with Adam.

I have, however, realised why I liked the dot idea.  Consider

f r b = r.foo && b

With dot-notation baked in (non-orthogonally), f would get the type

f :: (r { foo :: Bool }) => r -> Bool -> Bool

With the orthogonal proposal, f is equivalent to
f r b = foo r && b

Now it depends. 

* If there is at least one record in scope with a field foo 
  and no other foo's, then you get the above type

* If there are no records in scope with field foo
  and no other foo's, the program is rejected

* If there are no records in scope with field foo
  but there is a function foo, then the usual thing happens.

This raises the funny possibility that you might have to define a local type
data Unused = U { foo :: Int }
simply so that there *is* at least one foo field in scope.

I wanted to jot this point down, but I think it's a lesser evil than falling 
into the dot-notation swamp.  After all, it must be vanishingly rare to write a 
function manipulating foo fields when there are no such records around. It's 
just a point to note (NB Adam: design document).
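
For concreteness, here is a minimal sketch of how the Plan's Has class can be
modelled with today's DataKinds machinery. It is only a sketch: the Plan
passes the field label implicitly, so the explicit Label argument below is a
stand-in for the Symbol parameter, and Whale is a made-up record.

    {-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
                 FlexibleInstances, TypeFamilies #-}
    import GHC.TypeLits (Symbol)

    -- Field labels as type-level strings.
    data Label (f :: Symbol) = Label

    -- The Has class of the Plan, with an explicit label argument
    -- standing in for the implicit Symbol parameter.
    class Has r (f :: Symbol) t where
      getFld :: Label f -> r -> t

    -- Field named whaleFoo rather than foo, so the H98 selector doesn't
    -- clash with the overloaded foo below (-XNoMonoRecordFields's job).
    data Whale = MkWhale { whaleFoo :: Bool }

    -- The instance the extension would generate for Whale's foo field.
    instance t ~ Bool => Has Whale "foo" t where
      getFld _ (MkWhale b) = b

    -- The selector -XPolyRecordFields would generate, once per module.
    foo :: Has r "foo" t => r -> t
    foo = getFld (Label :: Label "foo")

    -- f's inferred type is  Has r "foo" Bool => r -> Bool -> Bool,
    -- i.e. the (r { foo :: Bool }) => r -> Bool -> Bool sugar above.
    f :: Has r "foo" Bool => r -> Bool -> Bool
    f r b = foo r && b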

Simon

| -Original Message-
| From: glasgow-haskell-users-boun...@haskell.org [mailto:glasgow-haskell-
| users-boun...@haskell.org] On Behalf Of AntC
| Sent: 27 June 2013 13:37
| To: glasgow-haskell-users@haskell.org
| Subject: Re: Overloaded record fields
| 
| 
|  ... the orthogonality is also an important benefit.
|   It could allow people like Edward and others who dislike ...
|   to still use ...
| 
| 
| Folks, I'm keenly aware that GSoC has a limited timespan; and that there
| has already been much heat generated on the records debate.
| 
| Perhaps we could concentrate on giving Adam a 'plan of attack', and help
| resolving any difficulties he runs into. I suggest:
| 
| 1. We postpone trying to use postfix dot:
|It's controversial.
|The syntax looks weird whichever way you cut it.
|It's sugar, whereas we'd rather get going on functionality.
|(This does mean I'm suggesting 'parking' Adam's/Simon's syntax, too.)
| 
| 2. Implement class Has with method getFld, as per Plan.
| 
| 3. Implement the Record field constraints new syntax, per Plan.
| 
| 4. Implicitly generate Has instances for record decls, per Plan.
|Including generating for imported records,
|even if they weren't declared with the extension.
|(Option (2) on-the-fly.)
| 
| 5. Implement Record update, per Plan.
| 
| 6. Support an extension to suppress generating field selector functions.
|This frees the namespace.
|(This is -XNoMonoRecordFields in the Plan,
| but Simon M said he didn't like the 'Mono' in that name.)
|Then lenses could do stuff (via TH?) with the name.
| 
|[Those who've followed so far, will notice that
| I've not yet offered a way to select fields.
| Except with explicit getFld method.
| So this 'extension' is actually 'do nothing'.]
| 
| 7. Implement -XPolyRecordFields, not quite per Plan.
|This generates a poly-record field selector function:
| 
|    x :: r { x :: t } => r -> t    -- Has r x t => ...
|x = getFld
| 
| And means that H98 syntax still works:
| 
|x e -- we must know e's type to pick which instance
| 
| But note that it must generate only one definition
| for the whole module, even if x is declared in multiple data types.
| (Or in both a declared and an imported one.)
| 
| But not per the Plan:
| Do _not_ export the generated field selector functions.
| (If an importing module wants field selectors,
|  it must set the extension, and generate them for imported data
| types.
|  Otherwise we risk name clash on the import.
|  This effectively blocks H98-style modules
|  from using the 'new' record selectors, I fear.)
| Or perhaps I mean that the importing module could choose
| whether to bring in the field selector function??
| Or perhaps we export/import-control the selector function
| separately to the record and field name???
| 
| Taking 6. and 7. together means that for the same record decl:
| * one importing module could access it as a lens
| * another could use field selector functions
| 
| 8. (If GSoC hasn't expired yet!)
|    Implement -XDotPostfixFuncApply as an orthogonal extension ;-).
| 
| AntC
| 
| 
| 
| 


Re: Overloaded record fields

2013-06-28 Thread Daniel Trstenjak

Hi Evan,

 1 - Add an option to add a 'deriving (Lens)' to record declarations.
 That makes the record declare lenses instead of functions.

Well, no, that's exactly the kind of magic programming language hackery
that Haskell shouldn't be part of.

Deriving should only add something, but not change the behaviour of the 
underived case.

I'm really for convenience, but it shouldn't be added willy-nilly,
because in the long term that creates more harm than good.


Greetings,
Daniel



Re: PSA: GHC can now be built with Clang

2013-06-28 Thread Simon Marlow

On 26/06/13 04:13, Austin Seipp wrote:

Thanks Manuel!

I have an update on this work (I am also CC'ing glasgow-haskell-users,
as I forgot last time.) The TL;DR is this:

  * HEAD will correctly work with Clang 3.4svn on both Linux and OS X.
  * I have a small, 6-line patch to Clang to fix the build failure in
primitive (Clang was too eager to stringify something.) Once this fix
is integrated into Clang (hopefully very soon,) it will be possible to
build GHC entirely including all stage2 libraries without any patches.
The patch is here: http://llvm.org/bugs/show_bug.cgi?id=16371 - I am
hoping this will also make it into Xcode 5.
  * I still have to eliminate some warnings throughout the build, which
will require fiddling and a bit of refactoring. The testsuite still
probably won't run cleanly on Linux, at least, until this is done I'm
afraid (but then again I haven't tried...)

As for the infamous ticket #7602, the large performance regression on
Mac OS X, I have some numbers finally between my fast-TLS and slow-TLS
approach.

./gc_bench.slow-tls 19 50 5 22 +RTS -H180m -N7 -RTS  395.57s user
173.18s system 138% cpu 6:50.71 total

vs

./gc_bench.fast-tls 19 50 5 22 +RTS -H180m -N7 -RTS  322.98s user
132.37s system 132% cpu 5:44.65 total

Now, this probably looks totally awful from a scalability POV. And,
well, yeah, it is. But I am almost 100% certain there is something
extremely screwy going on with my machine here. I base this on the
fact that during gc_bench, kernel_task was eating up about ~600% of my
CPU consistently, giving user threads no time to run. I've noticed
this with other applications that were totally unrelated too (close
tweetbot - 800% CPU usage,) so I guess it's time to learn DTrace. Or
turn it on and off again or something. Ugh.

Anyway, if you look at the user times, you get a nice 30% speedup
which is about what we expect!


30% better than before is good, but we need some absolute figures. Can 
you validate that against the performance on Linux, or against the 
performance you get when the RTS is compiled with gcc?  If it's hard to 
get a direct comparison on equivalent hardware, you could compare the 
slowdown with -threaded on Linux and OS X.


Cheers,
Simon



On a related note, due to the source code structure at the moment,
Linux/Clang hilariously suffers from this same bug. That's because,
while Clang on Linux supports extremely fast TLS via __thread (like
GCC), the RTS code currently falls back to
pthread_getspecific/setspecific. I haven't
fixed this yet. It'll happen after I fix #7602 and get it merged in.
On my Linux machine, gc_bench also sees a consistent 30% speedup
between these two approaches, so I think this is a relatively accurate
measurement. Well, as accurate as I can be without running nofib just
yet. So if you're just dying to have GHC HEAD built with Clang HEAD on
Linux because you've got reasons, you should probably hold on.

I also may have a similar, better approach to fixing #7602 that is not
entirely as evil and sneaky as crashing the WebKit party. I'll follow
up on this soon when I have more info in a separate thread to confer
with Simon. With nofib results. I hope.

But anyway, 7.8 will be shaping up quite nicely - in particular in the
Mac OS X department, I hope. Please feel free to pester me with
questions or if you attempt something and it doesn't work.

On Tue, Jun 25, 2013 at 7:34 PM, Manuel M T Chakravarty
c...@cse.unsw.edu.au wrote:

Austin,

Thank you very much for taking care of all these clang issues — that is very 
helpful!

Cheers,
Manuel

Austin Seipp ase...@pobox.com:

Hi all,

As of commit 5dc74f it should now be possible to build a working
stage1 and stage2 compiler with (an extremely recent) Clang. With some
caveats.

You can just do:

$ CC=/path/to/clang ./configure --with-gcc=/path/to/clang
$ make

I have done this work on Linux. I don't expect much difficulty on Mac
OS X, but it needs testing. Ditto with Windows, although Clang/mingw
is considered experimental.

The current caveats are:

* The testsuite will probably fail everywhere, because of some
warnings that happen during the linking phase when you invoke the
built compiler. So the testsuite runner will probably be unhappy.
Clang is very noisy about unused options, unlike GCC. That needs to be
fixed somewhere in DriverPipeline I'd guess, but with some
refactoring.
* Some of the stage2 libraries don't build due to a Clang bug. These
are vector/primitive/dph so far.
* There is no buildbot or anything to cover it.

You will need a very recent Clang. Due to this bug (preventing
primitive etc. from building), you'll want to use an SVN checkout
no older than about 6 hours ago:

http://llvm.org/bugs/show_bug.cgi?id=16363

Hilariously, this bug was tripped on primitive's Data.Primitive.Types
module due to some CPP weirdness. But even with a proper bugfix and no
segfault, it still fails to correctly parse this same module with the
same CPP declarations. I'm fairly certain this is 

Re: PSA: GHC can now be built with Clang

2013-06-28 Thread Austin Seipp
Unfortunately the two machines are fairly wildly different in their
hardware characteristics. The OS X machine has 4GB of RAM and an
8-core i7; the Linux machine has 16GB, but only a 4-core i5. And they
have different clock speeds.

I'll get GCC 4.8 on my OS X machine so I can force a build with it and
compare, but that'll take a while.

Also, as I said, the Linux/Clang build technically has this very same
bug too (GCTDecl.h needs to be modified slightly to fix this, because
it *always* falls back to pthread_getspecific/setspecific currently
with an LLVM based compiler. On Linux, we can just use __thread
instead.) There, compared to a Linux/GCC build the change is
approximately a ~30% difference on -threaded applications like
gc_bench. So the slowdown ratios *seem* relatively consistent.

Anyway, I'll get around to some full nofib runs today if possible, but
the OS X machine will be a little sluggish. I have to do some other
stuff today for this, anyway (like actually getting my second patch to
Clang accepted today, hopefully.)

Interested parties who'd like to see this change, as it stands, can
look at my diff here (it's the 'clang-fast-tls' branch on my GHC
fork): 
https://github.com/thoughtpolice/ghc/commit/88f0a0b047ff67b40eeb4de940aca16271661564.patch

On Fri, Jun 28, 2013 at 4:37 AM, Simon Marlow marlo...@gmail.com wrote:
 [...]

Re: Overloaded record fields

2013-06-28 Thread AntC
 Simon Peyton-Jones simonpj at microsoft.com writes:
 
 I have, however, realised why I liked the dot idea.  Consider
 
   f r b = r.foo && b
 

Thanks Simon, I'm a little puzzled what your worry is.

 With dot-notation baked in (non-orthogonally), f would get the type

   f :: (r { foo :: Bool }) => r -> Bool -> Bool
 
 With the orthogonal proposal, f is equivalent to
   f r b = foo r && b
 
 Now it depends. 
 
 * If there is at least one record in scope with a field foo 
   and no other foo's, then you get the above type
 

I don't think the compiler has to go hunting for 'records in scope'.
One of two situations is in force:

Step 6. -XNoMonoRecordFields  
Then function foo is not defined.
(Or at least not by the record fields mechanism.)
This is exactly so that the program can define
its own access method (perhaps lenses,
 perhaps a function foo with a different type,
 the namespace is free for experiments).

Step 7. -XPolyRecordFields
Then function foo is defined with the same type
as (.foo) would have in the baked-in approach. IOW

f r b = (.foo) r && b   -- baked-in
f r b = foo r && b      -- non-baked-in, as you put it

foo = getFld :: (r { foo :: Bool }) => r -> Bool

So the type you give would be inferred for function f.

At the use site for f (say, applied to record type Bar), we need:

instance (t ~ Bool) => Has Bar "foo" t where ...

So generate that on-the-fly.
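
Written out against the Label/Has sketch added to Simon's message above
(Bar is a hypothetical record), the generated instance would be:

    data Bar = MkBar { barFoo :: Bool }    -- hypothetical record

    -- Matching on a bare t and improving it to Bool afterwards means
    -- the instance is selected by the record type alone, and the
    -- result type is then forced to Bool, which helps inference.
    instance t ~ Bool => Has Bar "foo" t where
      getFld _ (MkBar b) = b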


If the program declares a separate function foo,
then we have 'vanilla' name clash, just like double-declaring any name.
(Just like a H98 record with field foo, then declaring a function foo.)


Or is the potential difficulty something like this:

+ function f is defined as above in a module with -XPolyRecordFields.
+ function f is exported/imported.
+ the importing module also uses -XPolyRecordFields.
+ now in the importing module we try to apply f to a record.
  (say type Baz, not having field foo)
+ the compiler sees the (r { foo :: Bool }) constraint from f.

The compiler tries to generate on-the-fly:

instance (t ~ Bool) => Has Baz "foo" t where
    getFld (MkBaz { foo = foo }) = foo  -- no such field

But this could happen within a single module.
At this point, we need Adam to issue a really clear error message.


Or perhaps the importing module uses H98 records.
And it applies f to a record type Baz.
And there is a field foo of type Bool in data type Baz.
Then there's a function:

foo :: Baz -> Bool   -- H98 field selector

Now we _could_ generate an instance `Has Baz "foo" t`.
And it wouldn't clash with the Mono field selector foo.

But the extension is switched off. So we'll get:

No instance `Has Baz "foo" t` arising from the use of `f` ...



(It's this scenario that led me to suggest in step 7
that when exporting field foo,
_don't_ export field selector function foo.)


 
 This raises the funny possibility that you might have to define a local type
   data Unused = U { foo :: Int }
 simply so that there *is* at least one foo field in scope.
 

No, I don't see that funny decls are needed.


AntC

 
 | -Original Message-
 | From: glasgow-haskell-users On Behalf Of AntC
 | Sent: 27 June 2013 13:37
 | 
 | 7. Implement -XPolyRecordFields, not quite per Plan.
 |This generates a poly-record field selector function:
 | 
 |    x :: r { x :: t } => r -> t    -- Has r x t => ...
 |x = getFld
 | 
 | And means that H98 syntax still works:
 | 
 |x e -- we must know e's type to pick which instance
 | 
 | But note that it must generate only one definition
 | for the whole module, even if x is declared in multiple data types.
 | (Or in both a declared and an imported one.)
 | 
 | But not per the Plan:
 | Do _not_ export the generated field selector functions.
 | (If an importing module wants field selectors,
 |  it must set the extension, and generate them for imported data
 | types.
 |  Otherwise we risk name clash on the import.
 |  This effectively blocks H98-style modules
 |  from using the 'new' record selectors, I fear.)
 | Or perhaps I mean that the importing module could choose
 | whether to bring in the field selector function??
 | Or perhaps we export/import-control the selector function
 | separately to the record and field name???
 | 





Re: Overloaded record fields

2013-06-28 Thread Malcolm Wallace

On 28 Jun 2013, at 12:16, AntC wrote:

 Thanks Simon, I'm a little puzzled what your worry is.
 
  f r b = r.foo && b

 With dot-notation baked in (non-orthogonally), f would get the type
 
  f :: (r { foo :: Bool }) => r -> Bool -> Bool
 
 With the orthogonal proposal, f is equivalent to
  f r b = foo r && b


I believe Simon's point is that, if dot is special, we can infer the Has type 
above, even if the compiler is not currently aware of any actual record types 
that contain a foo field.  If dot is not special, then there *must* be some 
record containing foo already in scope, otherwise you cannot infer that type 
- you would get a "name not in scope" error instead.

The former case, where you can use a selector for a record that is not even 
defined yet, leads to good library separation.  The latter case couples 
somewhat-polymorphic record selectors to actual definitions.

Unless you require the type signature to be explicit, instead of inferred.

(For the record, I deeply dislike making dot special, so I would personally go 
for requiring the explicit type signature in this situation.)

Regards,
Malcolm


Re: Overloaded record fields

2013-06-28 Thread AntC
 Malcolm Wallace malcolm.wallace at me.com writes:
 
  
  With the orthogonal proposal, f is equivalent to
 f r b = foo r && b
 
 I believe Simon's point is that, if dot is special, we can infer 
the Has type above, even if the compiler is
 not currently aware of any actual record types that contain a foo 
field.

Thanks Malcolm, yes I think I do understand what Simon had in mind.
In effect .foo is a kind of literal.
It 'stands for' the type-level string "foo" :: Symbol parameter to Has.
(And that's very odd, as SPJ's SORF write-up points out, because that 
isn't an explicit parameter to getFld.)

But contrast H98 field selector functions. They're regular functions;
nothing about them shows they're specific to a record decl. And they 
work (apart from the non-overloadability).

So all we're doing is moving to foo being an overloaded field selection 
function. And it's a regular overloaded function, which resolves through 
instance matching.
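
For example (reusing the Label/Has sketch from earlier in the thread;
Shark, like Whale, is a made-up record):

    data Shark = MkShark { sharkFoo :: Bool }

    instance t ~ Bool => Has Shark "foo" t where
      getFld _ (MkShark b) = b

    -- One overloaded selector serves both records; each use site
    -- resolves by ordinary instance matching.
    demo :: Bool
    demo = foo (MkWhale True) && foo (MkShark False)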


  If dot is not special, then there
 *must* be some record containing foo already in scope, ...

I think you have it the wrong way round.
Field selector function foo must be in scope.
(Or rather, what I mean is that the name foo must be in scope,
and its in-scope binding must be to a field selector.)

And function foo must be in scope because there's a record in scope with 
field foo that generated the function via -XPolyRecordFields.


 
 ..., where you can use a selector for a record that is not
 even defined yet, leads to good library separation.

You can't do that currently. So I think you're asking for something beyond 
Simon's smallest increment.

 
 Unless you require the type signature to be explicit, instead of 
inferred.

Well, I think it's reasonable to require a signature if you use a 
selector for a record that is not even defined yet. I'm not convinced 
there's a strong enough use case to support automatic type inference. 
Simon said such cases must be "vanishingly rare".


 
 (For the record, I deeply dislike making dot special, so I would 
personally go for requiring the explicit
 type signature in this situation.)
 
 Regards,
 Malcolm
 







Re: Overloaded record fields

2013-06-28 Thread Dominique Devriese
Simon,

I see your point.  Essentially, the original proposal keeps the
namespace for field names syntactically distinguishable from that of
functions, so that the type given to r.foo doesn't depend on what is
in scope.  (.foo) is always defined and it is always a function of
type (r { foo :: t }) => r -> t. With the orthogonal proposal, it
would only be defined if there is a record with a foo field in scope,
although its definition or type does not actually depend on the
record.   One would then need to define an Unused record with a field
foo, or declare the following
  foo :: r { foo :: t } => r -> t
  foo = getFld
to essentially declare that foo should be treated as a field selector,
and I'm not even sure if type inference would work for this
definition... Maybe we could provide syntax like a declaration "field
foo;" as equivalent to the latter, but I have to acknowledge that this
is a downside for the orthogonal proposal.
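
Under the Label/Has sketch earlier in the thread, where the label is pinned
by an explicit argument, the declaration does typecheck; the open question is
inference for the Plan's implicit-label form. A "field foo;" declaration
would then be sugar for exactly this:

    foo :: Has r "foo" t => r -> t
    foo = getFld (Label :: Label "foo")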

Regards,
Dominique

2013/6/28 Simon Peyton-Jones simo...@microsoft.com:
 [...]

Who uses Travis CI and can help write a cookbook for those guys?

2013-06-28 Thread Ryan Newton
The Travis folks have decided they want to support Haskell better (multiple
compiler versions):

  https://github.com/travis-ci/travis-ci/issues/882#issuecomment-20165378

(Yay!)  They're asking for someone to help them with setup scripts.
 They mention their cookbook collection here:

   https://github.com/travis-ci/travis-cookbooks

In that thread above, I pasted our little script that fetches and installs
multiple GHC versions, but I have little experience with cloud
technologies & VMs.  Can someone jump in and help push this forward?

As a community, I'm sure it would be great to get a higher percentage of
Hackage packages using simple, hosted continuous testing... I'd personally
like to replace my Jenkins install if they can get the necessary GHC
versions in there.

Best,
  -Ryan