Re: In opposition of Functor as super-class of Monad

2012-10-24 Thread Duncan Coutts
On 24 October 2012 11:16, S. Doaitse Swierstra doai...@swierstra.net wrote:
 There are very good reasons for not following this road; indeed everything 
 which is a Monad can also be made an instance of Applicative. But more often 
 than not we want to have a more specific implementation. Because Applicative 
 is less general, there is in general more that you can do with it.

I don't think anyone is suggesting that we force all types that are
both Monad and Applicative to use (<*>) = ap as the implementation.
As you say, that'd be crazy.

The details and differences between the various superclass proposals
are to do with how you provide the explicit instance vs getting the
default.
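To illustrate the default being discussed, here is a sketch using an invented Wrap type (not taken from any of the proposals): any Monad can get a generic Applicative instance via ap, which a type would override when it has a more specific, more efficient (<*>).

```haskell
import Control.Monad (ap)

-- Illustrative type, not from the proposal pages: a Maybe wrapper whose
-- Applicative instance just takes the generic monadic default.
newtype Wrap a = Wrap { unWrap :: Maybe a }

instance Functor Wrap where
  fmap f (Wrap m) = Wrap (fmap f m)

instance Applicative Wrap where
  pure = Wrap . Just
  (<*>) = ap   -- the generic default a superclass proposal would supply

instance Monad Wrap where
  Wrap m >>= f = Wrap (m >>= unWrap . f)
```

The point in the thread is that this default must remain overridable: a type with a smarter (<*>) should be free to provide it.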

The wiki page explains it and links to the other similar proposals:

http://hackage.haskell.org/trac/ghc/wiki/DefaultSuperclassInstances

Duncan

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: String != [Char]

2012-03-19 Thread Duncan Coutts
On 17 March 2012 01:44, Greg Weber g...@gregweber.info wrote:
 the text library and Text data type have shown the worth in real world
 Haskell usage with GHC.
 I try to avoid String whenever possible, but I still have to deal with
 conversions and other issues.
 There is a lot of real work to be done to convert away from [Char],
 but I think we need to take it out of the language definition as a
 first step.

I'm pretty sure the majority of people would agree that if we were
making the Haskell standard nowadays we'd make the String type abstract.

Unfortunately I fear making the change now will be quite disruptive,
though I don't think we've collectively put much effort yet into
working out just how disruptive.

In principle I'd support changing to reduce the number of string types
used in interfaces. From painful professional experience, I think that
one of the biggest things where C++ went wrong was not having a single
string type that everyone would use (I once had to write a C++
component integrating code that used 5 different string types). Like
Python 3, we should have two common string types used in interfaces:
string and bytes (with implementations like our current Text and
ByteString).
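As a rough illustration of keeping a string/bytes distinction at interface boundaries, here is a toy Latin-1 codec standing in for real Text/ByteString conversions (the function names are made up for this sketch):

```haskell
import Data.Char (chr, ord)
import Data.Word (Word8)

-- A toy "bytes" boundary: encode/decode via Latin-1, purely to show the
-- shape of a string/bytes split. Real code would use Text/ByteString with
-- a proper encoding such as UTF-8.
encodeLatin1 :: String -> [Word8]
encodeLatin1 = map (fromIntegral . ord)

decodeLatin1 :: [Word8] -> String
decodeLatin1 = map (chr . fromIntegral)
```

The design point is that the encoding step is explicit at the boundary, rather than every library picking its own representation.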

BTW, I don't think taking it out of the language would be a helpful
step. We actually want to tell people "use *this* string type in
interfaces", not leave everyone to make their own choice. I think
taking it out of the language would tend to encourage everyone to make
their own choice.

Duncan



Re: Proposal: Define UTF-8 to be the encoding of Haskell source files

2011-04-17 Thread Duncan Coutts
On Thu, 2011-04-07 at 15:44 +0200, Roel van Dijk wrote:
 On 7 April 2011 14:11, Duncan Coutts duncan.cou...@googlemail.com wrote:
  I would be happy to work with you and others to develop the report text
  for such a proposal. I posted my first draft already :-)
 
 What would be a good way to proceed? Looking at the process I think we
 should create a wiki page and a ticket for this proposal. If necessary
 I'll volunteer to be the proposal owner.

Ok, I can give you permissions on the wiki. What is your username on the
haskell-prime wiki?

Duncan




Re: Proposal: Define UTF-8 to be the encoding of Haskell source files

2011-04-06 Thread Duncan Coutts
On 4 April 2011 23:48, Roel van Dijk vandijk.r...@gmail.com wrote:
 * Proposal

 The Haskell 2010 language specification states that "Haskell uses the
 Unicode character set" [2]. It does not state what encoding should be
 used. This means, strictly speaking, it is not possible to reliably
 exchange Haskell source files on the byte level.

 I propose to make UTF-8 the only allowed encoding for Haskell source
 files. Implementations must discard an initial Byte Order Mark (BOM)
 if present [3].

 * Next step

 Discussion! There was already some discussion on the haskell-cafe
 mailing list [7].

This is a simple and obviously sensible proposal. I'm certainly in favour.

I think the only area where there might be some issue to discuss is
the language of the report. As far as I can see, the report does not
require that modules exist as files, does not require the .hs
extension and does not give the standard mapping from module name to
file name.

So, since the goal is interoperability of source files, perhaps we
should also have a section somewhere with interoperability guidelines
for implementations that do store Haskell programs as OS files. The
section would describe the one module per file convention, the .hs
extension (this is already obliquely mentioned in the section on
literate Haskell syntax) and the mapping of module names to file names
in common OS file systems. Then this UTF8 stipulation could go there
(and it would be clear that it applies only to conventional
implementations that store Haskell programs as files).

e.g.

Interoperability Guidelines


This Report does not specify how Haskell programs are represented or
stored. There is however a conventional representation using OS files.
Implementations that conform to these guidelines will benefit from the
portability of Haskell program representations.

Haskell modules are stored as files, one module per file. These
Haskell source files are given the file extension .hs for usual
Haskell files and .lhs for literate Haskell files (see section
10.4).

Source files must be encoded as UTF-8 \cite{utf8}. Implementations
must discard an initial Byte Order Mark (BOM) if present.

To find a source file corresponding to a module name used in an import
declaration, the following mapping from module name to OS file name is
used. The '.' character is mapped to the OS's directory separator
string while all other characters map to themselves. The .hs or
.lhs extension is added. Where both .hs and .lhs files exist for
the same module, the .lhs one should be used. The OS's standard
convention for representing Unicode file names should be used.

For example, on a UNIX based OS, the module A.B would map to the file
name A/B.hs for a normal Haskell file or to A/B.lhs for a literate
Haskell file. Note that because it is rare for a Main module to be
imported, there is no restriction on the name of the file containing
the Main module. It is conventional, but not strictly necessary, that
the Main module use the .hs or .lhs extension.
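The module-name-to-file-name mapping described above can be sketched directly; this simplified version assumes a UNIX-style '/' directory separator:

```haskell
-- Map a module name like "A.B" to its candidate source file names, per
-- the guideline text above. Assumes '/' as the directory separator.
moduleToPaths :: String -> [FilePath]
moduleToPaths m = [base ++ ".hs", base ++ ".lhs"]
  where
    base = map dotToSlash m
    dotToSlash '.' = '/'
    dotToSlash c   = c
```

A real implementation would use the OS's separator and prefer the .lhs file when both exist, as the guideline states.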


Duncan



Re: Proposal: Define UTF-8 to be the encoding of Haskell source files

2011-04-06 Thread Duncan Coutts
On Wed, 2011-04-06 at 16:09 +0100, Ben Millwood wrote:
 On Wed, Apr 6, 2011 at 2:13 PM, Duncan Coutts
 duncan.cou...@googlemail.com wrote:
 
  Interoperability Guidelines
  
 
  [...]
 
  To find a source file corresponding to a module name used in an import
  declaration, the following mapping from module name to OS file name is
  used. The '.' character is mapped to the OS's directory separator
  string while all other characters map to themselves. The .hs or
  .lhs extension is added. Where both .hs and .lhs files exist for
  the same module, the .lhs one should be used. The OS's standard
  convention for representing Unicode file names should be used.
 
 
 This standard isn't quite universal. For example, jhc will look for
 Data.Foo in Data/Foo.hs but also Data.Foo.hs [1]. We could take this
 as an opportunity to discuss that practice, or we could try to make
 the changes to the report orthogonal to that issue.

Indeed. But it's true to say that if you do support the common
convention then you get portability. This does not preclude JHC from
supporting something extra, but sources that take advantage of JHC's
extension are not portable to implementations that just use the common
convention.

 In some sense I think it's cute that the Report doesn't specify
 anything about how Haskell modules are stored or represented, but I
 don't think that freedom is actually used, so I'm happy to see it go.
 I'd think, though, that in that case there would be more to discuss
 than just the encoding, so if we could separate out the issues here, I
 think that would be useful.

It's not going. I hope I was clear in the example text that the
interoperability guidelines are not forcing implementations to use
files etc., just that if they do, and they use these conventions, then
sources will be portable between implementations.

It doesn't stop an implementation using URLs, sticking multiple modules
in a file or keeping modules in a database.

Duncan




Re: Haskell 2010 libraries

2010-05-04 Thread Duncan Coutts
On Fri, 2010-04-30 at 10:42 +0100, Simon Marlow wrote:

 Here are some options:
 
3. allow packages to shadow each other, so haskell2010 shadows
   base.  This is a tantalising possibility, but I don't have
   any idea what it would look like, e.g. should the client or
   the package provider specify shadowing?

Note that we already have some notion of shadowing. Modules found in
local .hs files shadow modules from packages.

Duncan



Re: Haskell 2010 libraries

2010-05-04 Thread Duncan Coutts
On Fri, 2010-04-30 at 10:42 +0100, Simon Marlow wrote:
 Hi Folks,
 
 I'm editing the Haskell 2010 report right now, and trying to decide what 
 to do about the libraries.  During the Haskell 2010 process the 
 committee agreed that the libraries in the report should be updated, 
 using the current hierarchical names, adding new functionality from the 
 current base package, and dropping some of the H'98 library modules that 
 now have better alternatives.
 
 In Haskell 2010 we're also adding the FFI modules.  The FFI addendum 
 used non-hierarchical names (CForeign, MarshalAlloc etc.) but these are 
 usually known by their hierarchical names nowadays: e.g. Foreign.C, 
 Foreign.Marshal.Alloc.  It would seem strange to add the 
 non-hierarchical names to the Haskell language report.
 
 So this is all fine from the point of view of the Haskell report - I can 
 certainly update the report to use the hierarchical module names, but 
 that presents us with one or two problems in the implementation.
 
 However, what happens when someone wants to write some code that uses
 Haskell 2010 libraries, but also uses something else from base, say 
 Control.Concurrent?  The modules from haskell2010 overlap with those 
 from base, so all the imports of Haskell 2010 modules will be ambiguous.

   The Prelude is a bit of a thorny issue too: currently it is in base, 
 but we would have to move it to haskell2010.

This problem with the Prelude also already exists. It is currently not
possible to write a H98-only program that depends only on the haskell98
package and not on the base package, because the Prelude is exported
from base and not from haskell98.

 Bear in mind these goals: we want to
 
a. support writing code that is Haskell 2010 only: it only uses
   Haskell 2010 language features and modules.
 
b. not break existing code as far as possible
 
c. whatever we do should extend smoothly when H'2011 makes
   further changes, and so on.
 
 Here are some non-options:
 
1. Not have a haskell2010 package.  We lose (a) above, and we
   lose the ability to add or change the API for these modules,
   in base, since they have to conform to the H'2010 spec.  If
   H'2011 makes any changes to these modules, we're really stuck.
 
2. As described above: you can either use haskell2010, or base,
   but not both.  It would be painful to use haskell2010 in
   GHCi, none of the base modules would be available.
 
 Here are some options:
 
3. allow packages to shadow each other, so haskell2010 shadows
   base.  This is a tantalising possibility, but I don't have
   any idea what it would look like, e.g. should the client or
   the package provider specify shadowing?

So one option is simply to have the client specify shadowing by the
order in which packages are listed on the command line / in the .cabal
file (or some other compiler-dependent mechanism).

If people think the order in the .cabal file is not sufficiently
explicit then I'm sure we can concoct some more explicit syntax. We
already need to add some syntax to allow a package to depend on multiple
versions of a single dependency.

The advantage of the client doing it is it's quite general. The downside
is it's quite general: people can do it anywhere and can easily get
incompatible collections of types. For example base:Prelude.Int would
only be the same as haskell2010:Prelude.Int because it is explicitly set
up to be that way. Arbitrary shadowing would not be so co-operative.


The provider doing it seems fairly attractive. Cases of co-operative
overlapping have to be explicitly constructed by the providing packages
anyway (see e.g. base3 and base4).

I'm not quite sure how it would be implemented but from the user's point
of view they just list the package dependencies as usual and get the
sensible overlapping order. Presumably packages not designed to be used
in an overlapping way should still give an error message.

The provider doing it rather than the client should avoid the user
having to think too much or there being too many opportunities to do
foolish and confusing things. Only the sensible combinations should
work.

 Thoughts?  Better ideas?

So I think I quite like option 3. It doesn't sound to me as complicated
or as subtle as Malcolm seems to fear.

If I write:

build-depends: base, haskell2010

then since haskell2010 has been explicitly set up for this overlapping
to be allowed, then we get haskell2010 shadowing base (irrespective of
the order in which the client lists the packages).

Duncan



Re: Unsafe hGetContents

2009-10-20 Thread Duncan Coutts
On Tue, 2009-10-20 at 13:58 +0100, Simon Marlow wrote:

 Duncan has found a definition of hGetContents that explains why it has 
 surprising behaviour, and that's very nice because it lets us write the 
 compilers that we want to write, and we get to tell the users to stop 
 moaning because the strange behaviour they're experiencing is allowed 
 according to the spec.  :-)

:-)

 Of course, the problem is that users don't want the hGetContents that 
 has non-deterministic semantics, they want a deterministic one.  And for 
 that, they want to fix the evaluation order (or something).  The obvious 
 drawback with fixing the evaluation order is that it ties the hands of 
 the compiler developers, and makes a fundamental change to the language 
 definition.

I've not yet seen anyone put forward any practical programs that have
confusing behaviour but were not written deliberately to be as wacky as
possible and avoid all the safety mechanisms.

The standard use case for hGetContents is reading a read-only file, or
stdin where it really does not matter when the read actions occur with
respect to other IO actions. You could do it in parallel rather than
on-demand and it'd still be ok.

There's the beginner mistake where people don't notice that they're not
actually demanding anything before closing the file, that's nothing new
of course.
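A sketch of both patterns, the intended use (forcing the contents before closing) versus the beginner mistake of closing first; the file name here is illustrative:

```haskell
import System.IO

main :: IO ()
main = do
  writeFile "input.tmp" "two\nlines\n"   -- set up an illustrative input file

  -- Intended use: demand the (lazily read) contents before closing.
  h <- openFile "input.tmp" ReadMode
  s <- hGetContents h
  let n = length (lines s)
  n `seq` hClose h                       -- force s, *then* close
  print n                                -- prints 2

  -- The beginner mistake would be to call hClose h before forcing s,
  -- leaving s as truncated (typically empty) contents.
```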

Duncan



Re: Unsafe hGetContents

2009-10-20 Thread Duncan Coutts
On Tue, 2009-10-20 at 15:45 +0100, Simon Marlow wrote:

  I've not yet seen anyone put forward any practical programs that have
  confusing behaviour but were not written deliberately to be as wacky as
  possible and avoid all the safety mechanism.
 
  The standard use case for hGetContents is reading a read-only file, or
  stdin where it really does not matter when the read actions occur with
  respect to other IO actions. You could do it in parallel rather than
  on-demand and it'd still be ok.
 
  There's the beginner mistake where people don't notice that they're not
  actually demanding anything before closing the file, that's nothing new
  of course.
 
 If the parallel runtime reads files eagerly, that might hide a resource 
 problem that would occur when the program is run on a sequential system, 
 for example.

That's true, but we have the same problem without doing any IO. There
are many ways of generating large amounts of data.

Duncan



Re: Unsafe hGetContents

2009-10-06 Thread Duncan Coutts
On Tue, 2009-10-06 at 15:18 +0200, Nicolas Pouillard wrote:

  The reason it's hard is that to demonstrate a difference you have to get 
  the lazy I/O to commute with some other I/O, and GHC will never do that. 
If you find a way to do it, then we'll probably consider it a bug in GHC.
  
  You can get lazy I/O to commute with other lazy I/O, and perhaps with 
  some cunning arrangement of pipes (or something) that might be a way to 
  solve the puzzle.  Good luck!
 
 Oleg's example is quite close, don't you think?
 
 URL: http://www.haskell.org/pipermail/haskell/2009-March/021064.html


I didn't think that showed very much. He showed two different runs of
two different IO programs where he got different results after having
bypassed the safety switch on hGetContents.

It shows that lazy IO is non-deterministic, but then we knew that. It
didn't show anything was impure.

As a software engineering thing, it's recommended to use lazy IO in the
cases where the non-determinism has a low impact, ie where the order of
the actions with respect to other actions doesn't really matter. When it
does matter then your programs will probably be more comprehensible if
you do the actions more explicitly.

For example we have the shoot-yourself-in-the-foot restriction that you
can only use hGetContents on a handle a single time (this is the safety
mechanism that Oleg turned off) and after that you cannot write to the
same handle. That's not because it'd be semantically unsound if those
restrictions were not there, but it would let you write some jolly
confusing non-deterministic programs.

Duncan



Re: Strongly Specify Alignment for FFI Allocation

2009-09-28 Thread Duncan Coutts
On Sat, 2009-09-26 at 04:20 +0100, Brandon S. Allbery KF8NH wrote:
 On Sep 25, 2009, at 07:54 , Duncan Coutts wrote:
  pessimistic. We could do better on machines that are tolerant of
  misaligned memory accesses such as x86. We'd need to use cpp to switch
 
 
 Hm.  I thought x86 could be tolerant (depending on a cpu configuration  
 bit) but the result was so slow that it wasn't worth it?

It's slow and you would not want to do it much, however I think it's
still comparable in speed to doing a series of byte reads/writes and
using bit twiddling to convert to/from the larger word type. It's
probably also faster sometimes to do an unaligned operation than to do an
alignment test each time and call a special unaligned version.

Duncan



Re: Strongly Specify Alignment for FFI Allocation

2009-09-25 Thread Duncan Coutts
On Thu, 2009-09-24 at 23:13 +0100, Don Stewart wrote:

  It would be beneficial if this wording was applied to all allocation
  routines - such as mallocForeignPtrBytes, mallocForeignPtrArray, etc.
  For the curious, this proposal was born from the real-world issue of
  pulling Word32's from a ByteString in an efficient but portable manner
  (binary is portable but inefficient, a straightforward
  unsafePerformIO/peek is efficient but needs alignment).
 
 As a side issue, the get/put primitives on Data.Binary should be
 efficient (though they're about twice as fast when specialized to a
 strict bytestring... stay tuned for a package in this area).

They are efficient within the constraint of doing byte reads and
reconstructing a multi-byte word using bit twiddling.

eg:

getWord16be :: Get Word16
getWord16be = do
    s <- readN 2 id
    return $! (fromIntegral (s `B.index` 0) `shiftl_w16` 8) .|.
              (fromIntegral (s `B.index` 1))

Whereas reading an aligned word directly is rather faster. The problem
is that the binary API cannot guarantee alignment so we have to be
pessimistic. We could do better on machines that are tolerant of
misaligned memory accesses, such as x86. We'd need to use cpp to switch
between two implementations depending on whether the arch supports
misaligned memory access and whether it is big or little endian.

#ifdef ARCH_ALLOWS_MISALIGNED_MEMORY_ACCESS

#ifdef ARCH_LITTLE_ENDIAN
getWord32le = getWord32host
#else
getWord32le = ...
#endif

etc

Note also that currently the host-order binary ops are not documented as
requiring alignment, but they do. They will fail, e.g. on SPARC or PPC,
for misaligned access.

Duncan



Re: [Haskell'-private] StricterLabelledFieldSyntax

2009-08-10 Thread Duncan Coutts
On Sun, 2009-07-26 at 02:34 +0100, Ian Lynagh wrote:
 Hi all,
 
 I've made a ticket and proposal page for making the labelled field
 syntax stricter, e.g. making this illegal:
 
 data A = A {x :: Int}
 
 y :: Maybe A
 y = Just A {x = 5}
 
 and requiring this instead:
 
 data A = A {x :: Int}
 
 y :: Maybe A
 y = Just (A {x = 5})

I think I don't like it. It makes the labelled function argument trick
much less nice syntactically.

... <- createProcess proc { cwd = Just blah }

This is especially so if the labelled function argument is not the final
parameter, since then one cannot use $; you'd have to put the whole thing
in parentheses.

The labelled argument technique is one I think we should be making
greater use of (eg look at the proliferation of openFile variants) so I
don't think we should be changing the syntax to make it harder / uglier.

Duncan



Re: Haskell 2010: libraries

2009-07-14 Thread Duncan Coutts
On Tue, 2009-07-14 at 00:20 +0100, Ian Lynagh wrote:
 On Mon, Jul 13, 2009 at 09:56:50PM +0100, Duncan Coutts wrote:
  
  I'd advocate 4. That is, drop the ones that are obviously superseded.
  Keep the commonly used and uncontroversial (mostly pure) modules and
  rename them to use the new hierarchical module names.
  
  Specifically, I suggest:
  
   1. Ratio   keep as Data.Ratio
   2. Complex keep as Data.Complex
   3. Numeric keep as Numeric (?)
   4. Ix  keep as Data.Ix
   5. Array   keep as Data.Array
   6. Listkeep as Data.List
   7. Maybe   keep as Data.Maybe
   8. Charkeep as Data.Char
   9. Monad   keep as Control.Monad
  10. IO  keep as System.IO
  11. Directory   drop
  12. System  drop (superseded by System.Process)
  13. Timedrop
  14. Locale  drop
  15. CPUTime drop
  16. Random  drop
 
 We've been fortunate recently that, because the hierarchical modules
 haven't been in the standard, we've been able to extend and improve them
 without breaking compatibility with the language definition. In some
 cases, such as the changes to how exceptions work, we haven't had this
 freedom as the relevant functions are exposed by the Prelude, and that
 has been causing us grief for years.
 
 To take one example, since List was immortalised in the H98 report with
 104 exports, Data.List has gained an additional 7 exports:
 foldl'
 foldl1'
 intercalate
 isInfixOf
 permutations
 stripPrefix
 subsequences
 The last change (making the behaviour of the generic* functions
 consistent with their non-generic counterparts) was less than a year
 ago, and the last additions were less than 2 years ago.

Though also note that we have not changed any of the existing ones. Is
there a problem with specifying in the libraries section of the report
that the exports are a minimum and not a maximum?

 But to me, the most compelling argument for dropping them from the
 report is that I can see no benefit to standardising them as part of the
 language, rather than in a separate base libraries standard.

Some functions, especially the pure ones are really part of the
character of the language (and some are specified as part of the
syntax). We have not had major problems with the pure parts of the
standard libraries, our problems have almost all been with the system
facing parts (handles, files, programs, exceptions).

I don't see any particular problem with having some essential (in the
sense of being part of the essence of the language) libraries in the
main report and some separate libraries report in a year or two's time
standardising some of the trickier aspects of libraries for portable
programs to interact with the OS (addressing Malcolm's point about the
need for this so as to be able to write portable programs).

Duncan



Re: Haskell 2010: libraries

2009-07-13 Thread Duncan Coutts
On Wed, 2009-07-08 at 15:09 +0100, Simon Marlow wrote:

 I'm mainly concerned with projecting a consistent picture in the Report, 
 so as not to mislead or confuse people.  Here are the options I can see:

   2. Just drop the obvious candidates (Time, Random, CPUTime,
  Locale, Complex?), leaving the others.
 
   3. Update the libraries to match what we have at the moment.
  e.g. rename List to Data.List, and add the handful of
  functions that have since been added to Data.List.  One
  problem with this is that these modules are then tied to
  the language definition, and can't be changed through
  the usual library proposal process.  Also it would seem
  slightly strange to have a seemingly random set
  of library modules in the report.
 
   4. Combine 2 and 3: drop some, rename the rest.

I'd advocate 4. That is, drop the ones that are obviously superseded.
Keep the commonly used and uncontroversial (mostly pure) modules and
rename them to use the new hierarchical module names.

Specifically, I suggest:

 1. Ratio   keep as Data.Ratio
 2. Complex keep as Data.Complex
 3. Numeric keep as Numeric (?)
 4. Ix  keep as Data.Ix
 5. Array   keep as Data.Array
 6. Listkeep as Data.List
 7. Maybe   keep as Data.Maybe
 8. Charkeep as Data.Char
 9. Monad   keep as Control.Monad
10. IO  keep as System.IO
11. Directory   drop
12. System  drop (superseded by System.Process)
13. Timedrop
14. Locale  drop
15. CPUTime drop
16. Random  drop

The slightly odd thing here is keeping System.IO but dropping the other
IO libs Directory and System. We obviously have to drop System, because
it's more or less a deprecated API and it's superseded by System.Process
(which we almost certainly do not want to standardise at this stage).

It'd be nice to have a clear dividing line of keeping the pure stuff and
dropping the bits for interacting with the system however we have to
keep System.IO since bits of it are re-exported through the Prelude
(unless we also trim the Prelude). The bits for interacting with the
system are of course exactly the bits that are most prone to change and
are most in need of improvement.

Another quirk is that we never changed the name of the Numeric module.

Duncan



Re: Haskell 2010: libraries

2009-07-13 Thread Duncan Coutts
On Mon, 2009-07-13 at 21:57 +0100, Duncan Coutts wrote:
 On Wed, 2009-07-08 at 15:09 +0100, Simon Marlow wrote:
 
  I'm mainly concerned with projecting a consistent picture in the Report, 
  so as not to mislead or confuse people.  Here are the options I can see:
 
2. Just drop the obvious candidates (Time, Random, CPUTime,
   Locale, Complex?), leaving the others.
  
3. Update the libraries to match what we have at the moment.
   e.g. rename List to Data.List, and add the handful of
   functions that have since been added to Data.List.  One
   problem with this is that these modules are then tied to
   the language definition, and can't be changed through
   the usual library proposal process.  Also it would seem
   slightly strange to have a seemingly random set
   of library modules in the report.

Another thing we can do here is specify that the contents of these
modules is a minimum and not a maximum, allowing additions through the
usual library proposal process.

4. Combine 2 and 3: drop some, rename the rest.
 
 I'd advocate 4. That is, drop the ones that are obviously superseded.
 Keep the commonly used and uncontroversial (mostly pure) modules and
 rename them to use the new hierarchical module names.

Oh and additionally include the FFI modules under their new names.

Duncan



Re: what about moving the record system to an addendum?

2009-07-07 Thread Duncan Coutts
On Mon, 2009-07-06 at 18:28 -0700, John Meacham wrote:
 Well, without a replacement, it seems odd to remove it. Also, Haskell
 currently doesn't _have_ a record syntax (I think it was always a
 misnomer to call it that) it has 'labeled fields'. None of the proposed
 record syntaxes fit the same niche as labeled fields so I don't see them
 going away even if a record syntax is added to haskell in the future.

The people proposing this can correct me if I'm wrong but my
understanding of their motivation is not to remove record syntax or
immediately to replace it, but to make it easier to experiment with
replacements by making the existing labelled fields syntax a modular
part of the language that can be turned on or off (like the FFI).

I'm not sure that I agree that it's the best approach but it is one idea
to try and break the current impasse. It seems currently we cannot
experiment with new record systems because they inevitably clash with
the current labelled fields and thus nothing changes.

Duncan



Re: A HERE Document syntax

2009-04-24 Thread Duncan Coutts
On Wed, 2009-04-22 at 20:52 -0700, Jason Dusek wrote:
 The conventional HERE document operator -- `` -- is not a
   good fit for Haskell. It's a perfectly legal user-level
   operator. I'd like to propose the use of backticks for HERE
   documents.

Just to say that I think this proposal is definitely worth considering.

I've not looked at all the details yet, but we should. See also some
comments on reddit:
http://www.reddit.com/r/haskell/comments/8ereh/a_here_document_syntax/


Duncan



Re: Outlaw tabs

2009-01-24 Thread Duncan Coutts
On Sat, 2009-01-24 at 00:35 +0100, Achim Schneider wrote:
 I guess everyone knows why.

Can I recommend Ian's "Good Haskell Style" page:

http://urchin.earth.li/~ian/style/haskell.html

We should have it linked/published more widely. The Vim mode that it
links to is also excellent. We should try and get it ported to the
Haskell emacs mode.

As others have also pointed out, adding "ghc-options: -fwarn-tabs" to a
project's .cabal file is a good way to stop tabs creeping back in.
A lot of projects use -Wall. If there is consensus on the tabs issue
then we could ask for -fwarn-tabs to be included in -Wall. That should
be a good first step. If we cannot achieve consensus within the
community for having -Wall include -fwarn-tabs then we have no hope of
banning them in the language definition.

So let's first test the waters on that proposal.

Duncan



RE: Repair to floating point enumerations?

2008-10-15 Thread Duncan Coutts
On Wed, 2008-10-15 at 11:25 +0100, Mitchell, Neil wrote:
 Hi Malcolm,
 
 The current behaviour does sound like a bug, and the revised behaviour
 does sound like a fix - and one that has a sensible explanation if
 people trip over it. In general having floating point be a member of
 classes such as Eq has some obvious problems, but I realise is a
 necessity for practical programming. Given that we have Eq Double, then
 Enum Double seems no worse.
 
 If we don't alter H98 then a valid program under H98 vs H' will give
 different results without any warning - that seems really bad. In
 addition, having two instances around for one typeclass/type pair in a
 big library (base) which are switched with some flag seems like a
 nightmare for compiler writers. So I think a good solution would be to
 fix H98 as a typo, and include it in H'.

I would take the contrary position and say H98 should be left alone and
the change should be proposed for H'.

The argument is that H98 is done and the revised report was published 7
years ago. Changing H98 now just doesn't seem to make much sense.

If we're talking about changing the meaning of 'valid' programs then
doing that at a boundary like H98 -> H' seems much more sensible than
having to explain the difference in H98-pre2008 programs vs
H98-post2008.

People will not expect all programs to be exactly the same between H98
and H' (otherwise there would be little need for a new standard). Yes H'
is mostly compatible extensions but not all of it.
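
The behaviour under discussion is the H98 rule that a fractional enumeration runs to the written limit plus half the step. A small illustration (standard GHC behaviour following H98, shown as a sketch):

```haskell
-- H98 Enum Double: [e1, e2 .. e3] keeps elements up to e3 + (e2-e1)/2,
-- so the enumeration can overshoot the written upper limit.
overshoot :: [Double]
overshoot = [1.0, 2.0 .. 3.5]   -- yields [1.0,2.0,3.0,4.0], not stopping at 3.5

main :: IO ()
main = print overshoot
```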

Duncan



Re: Mutually-recursive/cyclic module imports

2008-08-17 Thread Duncan Coutts
On Sat, 2008-08-16 at 13:51 -0400, Isaac Dupree wrote:
 Duncan Coutts wrote:
  [...]
  
  I'm not saying it's a problem with your proposal, I'd just like it to be
  taken into account. For example do dependency chasers need to grok just
import lines and {-# SOURCE #-} pragmas or do they need to calculate
  fixpoints.
 
 Good point.  What does the dependency chaser need to figure out?
 - exactly what dependency order files must be compiled 
 (e.g., ghc -c) ?
 - what files (e.g., .hi) are needed to be findable by the 
 e.g. (ghc -c) ?
 - recompilation avoidance?

It needs to work out which files the compiler will read when it compiles
that module.

So currently, I think we just have to read a single .hs file and
discover what modules it imports. We then can map those to .hi
or .hs-boot files in one of various search dirs or packages.

We also need to look at {-# SOURCE #-} import pragmas since that means
we look for a different file to ordinary imports.

Calculating dependency order and recompilation avoidance are things the
dep program has to do itself anyway. The basic task is just working out what
things compiling a .hs file depends on. Obviously it's somewhat
dependent on the Haskell implementation.
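
The kind of scan described can be sketched as follows (a simplification with hypothetical names; a real tool must also cope with comments, CPP and literate sources):

```haskell
import Data.List (stripPrefix)
import Data.Maybe (mapMaybe)

-- A dependency is either a normal import (resolved to a .hi/.hs file)
-- or a {-# SOURCE #-} import (resolved to a .hi-boot/.hs-boot file).
data Dep = Normal String | SourceImport String
  deriving (Eq, Show)

-- Extract the dependencies of one module from its source text.
scanImports :: String -> [Dep]
scanImports = mapMaybe dep . lines
  where
    dep l
      | Just r <- stripPrefix "import {-# SOURCE #-} " l = Just (SourceImport (modName r))
      | Just r <- stripPrefix "import qualified " l      = Just (Normal (modName r))
      | Just r <- stripPrefix "import " l                = Just (Normal (modName r))
      | otherwise                                        = Nothing
    modName = takeWhile (`notElem` " (\t")
```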

Duncan



Re: Polymorphic strict fields

2007-05-01 Thread Duncan Coutts
On Mon, 2007-04-30 at 19:47 -0700, Iavor Diatchki wrote:

 All of this leads me to think that perhaps we should not allow
 strictness annotations on polymorphic fields.  Would people find this
 too restrictive?

Yes.

Our current implementation of stream fusion relies on this:

data Stream a = forall s. Unlifted s =>
  Stream !(s -> Step a s)  -- ^ a stepper function
         !s                -- ^ an initial state

We use strictness on polymorphic (class constrained) fields to simulate
unlifted types. We pretend that the stream state types are all unlifted
and have various strict/unlifted type constructors:

data (Unlifted a, Unlifted b) => a :!: b = !a :!: !b
instance (Unlifted a, Unlifted b) => Unlifted (a :!: b) where ...
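
For readers unfamiliar with stream fusion, the Step type referred to above looks roughly like this (a sketch omitting the Unlifted constraints and strictness annotations under discussion; the real library's definitions differ in detail):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- One step of a stream: finish, produce an element, or skip ahead.
data Step a s = Done
              | Yield a s
              | Skip s

data Stream a = forall s. Stream (s -> Step a s) s

-- Unfold a stream back into a list.
unstream :: Stream a -> [a]
unstream (Stream next s0) = go s0
  where
    go s = case next s of
             Done       -> []
             Skip s'    -> go s'
             Yield x s' -> x : go s'

-- Example stream: count upwards from 0 to the given limit (exclusive).
enumStream :: Int -> Stream Int
enumStream n = Stream step 0
  where
    step i | i >= n    = Done
           | otherwise = Yield i (i + 1)
```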


Duncan



Re: defaults

2006-11-29 Thread Duncan Coutts
On Thu, 2006-11-30 at 12:21 +1100, Bernie Pope wrote:

 A compromise is to turn defaulting off by default. This would mean
 that if you want defaulting you have to ask for it. The question then  
 would be:
 does defaulting get exported across module boundaries? I would be  
 inclined to say no, but there may be compelling arguments for it.

If it's on a per-class basis then it would be reasonable to allow the
default to be exported, no? Then you just have to argue about if the
default Prelude should export defaults for the Num and other standard
classes.

For my GUI OOP inheritance hierarchy use-case I'd certainly want to
export the default. It'd be pretty useless otherwise (I don't mean in
general, just for this use-case).

Duncan



Re: defaults

2006-11-27 Thread Duncan Coutts
On Mon, 2006-11-20 at 12:05 +, Malcolm Wallace wrote:
 Prompted by recent discussion on the Hat mailing list about the problems
 of type-defaulting, I have added two new proposals for this issue to the
 Haskell-prime wiki at:
 
 http://hackage.haskell.org/trac/haskell-prime/wiki/Defaulting
 
 The main new proposal is a different way of specifying defaults, which
 includes the name of the class being defaulted (thus allowing
 user-defined classes to play this game), but unlike the original
 proposal, permits only one type to be specified per class.  The rules
 are therefore also simpler, whilst still (I believe) capturing the
 useful cases.

BTW, just to add another use case for allowing defaulting on any class:

One way to model OO class hierarchies (eg used in GUI libs like Gtk2Hs)
is by having a data type and a class per-object:

data Widget = ...

class WidgetClass widget where
  toWidget :: widget -> Widget  -- safe upcast

instance WidgetClass Widget

Then each sub-type is also an instance of the class, eg a button:

data Button = ...
class ButtonClass button where
  toButton :: button -> Button

instance WidgetClass Button
instance ButtonClass Button

etc.

So Widget is the canonical instance of WidgetClass.

Actually having this defaulting would be very useful. Suppose we want to
load a widget from a description at runtime (and we do, visual builders
like glade give us this). We'd have a function like this:

xmlLoad :: WidgetClass widget => FilePath -> IO (String -> widget)

So the action loads up an xml file and gives us back a function which we
can use to lookup named widgets from the file. Of course to make this
type safe we need to do a runtime checked downcast from Widget to the
specific widget type we wish to use. For example:

getWidget <- xmlLoad "Foo.glade"
let button1 :: Button
    button1 = getWidget "button1"

It would be nice not to have to annotate that button1 :: Button. However
often it would be necessary to do so because almost all operations on
the Button type are actually generic on the ButtonClass (since there are
some sub-types of Button). So actually we'd only constrain button1 to be
an instance of ButtonClass. So we'd end up with ambiguous overloading -
just like we get with (show . read). However if we could default it to
be actually of type Button then it should all work just fine.
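
A self-contained sketch of the pattern being described (hypothetical names; Typeable's cast stands in here for Gtk2Hs's runtime-checked downcast, and plain constructors for glade loading):

```haskell
{-# LANGUAGE ExistentialQuantification #-}
import Data.Typeable (Typeable, cast)

-- The base "object" type, wrapping any concrete widget.
data Widget = forall w. Typeable w => Widget w

class Typeable w => WidgetClass w where
  toWidget :: w -> Widget        -- safe upcast
  toWidget = Widget

-- Runtime-checked downcast: Nothing if the wrapped type doesn't match.
downcast :: WidgetClass w => Widget -> Maybe w
downcast (Widget w) = cast w

newtype Button = Button String deriving (Eq, Show)
instance WidgetClass Button

newtype Label = Label String deriving (Eq, Show)
instance WidgetClass Label
```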

Duncan



Re: Small note regarding the mailing list

2006-09-02 Thread Duncan Coutts
On Sat, 2006-09-02 at 12:45 -0700, isaac jones wrote:
 On Tue, 2006-08-29 at 14:04 +0200, Christophe Poucet wrote:
  Hello,
  
  Just a small request.  Would it be feasible to tag the Haskell-prime
  list in a similar manner as Haskell-cafe?
 
 I'd rather not.  If you want to be able to filter, you can use the
 Sender field which will always be:
 Sender: [EMAIL PROTECTED]

It's also got the normal List-Id: header, which is the most
reliable way to identify it (and indeed any other mailing list I've
ever seen).

Many email progs have special support for filter rules based on the
mailing list headers, eg evolution.

Duncan



Re: map and fmap

2006-08-14 Thread Duncan Coutts
On Mon, 2006-08-14 at 20:55 +0100, Jon Fairbairn wrote:
 On 2006-08-14 at 12:00PDT Iavor Diatchki wrote:
  Hello,
  I never liked the decision to rename 'map' to 'fmap', because it
  introduces two different names for the same thing (and I find the name
  `fmap' awkward).
 
 I strongly concur. There are far too many maps even without
 that, and having two names for the same thing adds to the
 confusion.

If it goes in that direction it'd be nice to consider the issue of
structures which cannot support a polymorphic map. Of course such
specialised containers (eg unboxed arrays or strings) are not functors
but they are still useful containers with a sensible notion of map.

The proposals to allow this involve MPTCs where the element type is a
parameter. That allows instances which are polymorphic in the element
type or instances which constrain it.
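
One MPTC formulation of the idea (illustrative only; the actual proposals of the time differed in detail, and this restricted form only supports maps that preserve the element type):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- A map class whose instances may fix the element type, so specialised
-- containers (unboxed arrays, packed strings) can participate even
-- though they are not Functors.
class ElemMap c a | c -> a where
  emap :: (a -> a) -> c -> c

-- Polymorphic instance: ordinary lists.
instance ElemMap [a] a where
  emap = map

-- A monomorphic "specialised container", standing in for something
-- like an unboxed Int array.
data IntList = Nil | Cons !Int IntList deriving (Eq, Show)

instance ElemMap IntList Int where
  emap _ Nil         = Nil
  emap f (Cons x xs) = Cons (f x) (emap f xs)
```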

Duncan



Re: FFI proposal: allow some control over the scope of C header files

2006-04-23 Thread Duncan Coutts
On Sun, 2006-04-23 at 17:26 -0400, Manuel M T Chakravarty wrote:
 Duncan Coutts:
  On Fri, 2006-04-21 at 09:32 -0400, Manuel M T Chakravarty wrote:
  
I think we'd want to be able to specify that a C header file not
escape a module boundary and probably we'd also want to be able to ask
that it not escape a package boundary (though this may be beyond the H'
spec since Haskell does not talk about packages).
   
   The H98 standard already specifies a NOINLINE pragma for any function:
   
 http://haskell.org/onlinereport/pragmas.html
   
   The simplest solution is to ensure that all Haskell compilers implement
   this pragma properly for foreign imported functions.  If you want finer
   control over where inlining takes place, then maybe the pragma should be
   extended to provide that finer control.
  
  I don't think we need to generalise the problem to all function
  inlinings. There are specific practical problems caused by inlining
  foreign calls that are not a problem for ordinary Haskell functions.
 
 Inlining of foreign functions causes extra problems, but generally
 inlining is a concern; so, if we can use the same mechanisms, we get a
 simpler language.

True, though with a special case mechanism we can make automatic checks
possible/easier.

   Besides, the standard so far doesn't cover command line options at all.
   So, there is the more general question of whether it should.
  
  I don't think we need to specify the command line interface. The
  required headers can be put in the module.
 
 That's ok with me.  I was just pointing out that many of the problems
 and/or lack of understanding of users that we are seeing has to do with
 the use of command line options.  We simply cannot address this unless
 the standard covers command line options.

Under my hypothetical scheme the ghc command line method would be
equivalent to putting it in the module and could be checked the same
way.

  What I really want is for the issue of header scope to be something that
  can be checked by the compiler. As a distro packager I see far too many
  people getting it wrong because they don't understand the issue. If we
  could declare the intended scope of the header files then 1. people
  would think about and 2. if they got it wrong it'd be checkable because
  the compiler would complain.
 
 Whether or not the compiler can check for wrong use, seems to me
 independent of whether we use inline pragmas or any other syntax.  GHC
 could very well check some of these things today.  It just doesn't.  Do
 you propose to make such checks mandatory in the standard?

That'd be nice, though I can see that it is more work.

 We are having two issues here:
 
 (1) Specification of which functions need what headers and whether 
 these functions can be inlined.
 (2) Let the compiler spot wrong uses of header files.
 
 These two issues are largely independent.

Yes, ok.

   Re (1), I dislike new syntax
 (or generally any additions to the language) and prefer using existing
 mechanisms as far as possible.  The reason is simply that Haskell is
 already very complicated.  Haskell' will be even more complicated.
 Hence, we must avoid any unnecessary additions.

Sure.

 Re (2), I am happy to discuss what kind of checks are possible, but I am
 worried that it'll be hard to check for everything without assistance
 from cabal, which I don't think will be part of Haskell'.

I think it can be checked without cabal. Outline: suppose we use a
module level granularity (I know jhc proposes to use a finer
granularity) so we track which C header files are needed to compile
which modules.

A FFI decl specifying a header file makes that module need that header.
Then transitively each module that imports that module needs that header
too. We can only stop the header leaking out of the module/package by
specifying NOINLINE on the imported function (or using some additional
syntax as I originally suggested).

So now it's easy to check what headers are needed to to compile any
module. Then we probably need to rely on an external mechanism (eg cabal
or the user) to make sure that all these headers are available - but at
least we can check that the user has done it right.
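
That transitive check amounts to a reachability computation over the import graph; a sketch for illustration (hypothetical types, ignoring the NOINLINE cut-off points discussed above):

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set
import Data.Map (Map)
import Data.Set (Set)

type Module = String
type Header = String

-- Per-module facts: direct imports and directly required C headers.
data ModInfo = ModInfo { imports :: [Module], headers :: [Header] }

-- Headers needed to compile a module: its own plus, transitively,
-- those of every module it imports.
neededHeaders :: Map Module ModInfo -> Module -> Set Header
neededHeaders env root = go (Set.singleton root) [root] Set.empty
  where
    go _    []     acc = acc
    go seen (m:ms) acc = case Map.lookup m env of
      Nothing   -> go seen ms acc
      Just info ->
        let new   = filter (`Set.notMember` seen) (imports info)
            seen' = foldr Set.insert seen new
            acc'  = foldr Set.insert acc (headers info)
        in go seen' (new ++ ms) acc'
```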

So it's at this point that issues (1) and (2) become related. If we say
that a header file transitively infects every client module then it
effectively bans private header files and so we need some mechanism to
limit the scope of header files to allow them again (like NOINLINE).

Eg with c2hs, I think that in theory we should be installing every .h
file that c2hs generates for each module with the library package. I've
never seen anyone actually do that (Cabal's c2hs support doesn't do that
for example).

 Re the concern about wrong use: FFI programming is a minefield.  We will
 never be able to make it safe.  So, I am reluctant to complicate the
 language just to make it (maybe) a little safer.  What IMHO will be far
 more effective is a good tutorial on FFI

Re: FFI proposal: allow some control over the scope of C header files

2006-04-21 Thread Duncan Coutts
On Fri, 2006-04-21 at 09:32 -0400, Manuel M T Chakravarty wrote:

  I think we'd want to be able to specify that a C header file not
  escape a module boundary and probably we'd also want to be able to ask
  that it not escape a package boundary (though this may be beyond the H'
  spec since Haskell does not talk about packages).
 
 The H98 standard already specifies a NOINLINE pragma for any function:
 
   http://haskell.org/onlinereport/pragmas.html
 
 The simplest solution is to ensure that all Haskell compilers implement
 this pragma properly for foreign imported functions.  If you want finer
 control over where inlining takes place, then maybe the pragma should be
 extended to provide that finer control.

I don't think we need to generalise the problem to all function
inlinings. There are specific practical problems caused by inlining
foreign calls that are not a problem for ordinary Haskell functions.

 Besides, the standard so far doesn't cover command line options at all.
 So, there is the more general question of whether it should.

I don't think we need to specify the command line interface. The
required headers can be put in the module.

  So some syntax off the top of my head:
  
  foreign import cheader module-local "foo/bar.h"
  
  I think there are 3 possibilities for the C header escape/scope setting
  (which should probably be mandatory rather than optional):
  module-local
  package-local (extension for compilers that have a notion of a package)
  global
 
 Is this additional complexity really necessary or would the use of
 NOINLINE pragmas not suffice?  It's really in a library context where
 you want to restrict the inlining of foreign functions, but there the
 foreign functions are probably not much used inside the library itself,
 but mainly exported, so I doubt that you would get much of a performance
 loss by just tagging all foreign imported functions that you don't want
 to escape as NOINLINE.

What I really want is for the issue of header scope to be something that
can be checked by the compiler. As a distro packager I see far too many
people getting it wrong because they don't understand the issue. If we
could declare the intended scope of the header files then 1. people
would think about and 2. if they got it wrong it'd be checkable because
the compiler would complain.

As it is at the moment people don't know they're doing anything dodgy
until some user of their package gets a mysterious gcc warning and
possibly a segfault.

If we just tell everyone that they should use NOINLINE then they won't
and they'll still get it wrong.

The reason for some specific syntax rather than using NOINLINE is that
the compiler will be able to track the header files needed by each
module. So we can avoid the situation where a call gets made outside the
scope of its defining header file - either by automatically #including
the header file in the right place, or by complaining if the user does
not supply the header (eg by putting it in the .cabal file).

So it's not the general issue of inlining but the specific problem of
what C header files are required to compile what modules.

The ideal situation I imagine is that the scope of the headers can be
checked automatically so that the compiler or cabal will complain to a
library author that their private header file needs to be marked as
local to the package/module or included in the library package file and
installed with the package.

Duncan



Re: Pragmas for FFI imports

2006-02-21 Thread Duncan Coutts
On Mon, 2006-02-20 at 19:59 -0800, John Meacham wrote:
 On Fri, Feb 17, 2006 at 01:45:27AM +0200, Einar Karttunen wrote:
  I would like to propose two pragmas to be included in Haskell'
  for use with FFI. One for specifying the include file defining
  the foreign import (INCLUDE in ghc) and an another for defining
  a library that the foreign import depends on, called FFI_LIB
  (not implemented at the moment). These changes would not break
  any existing code.
 
 Just to expand on this, Einar is working on adding this support to jhc
 right now in his work on the library system in jhc. the semantic we
 decided on was that an
 
 {-# INCLUDE foo.h #-}

I'd just like to note that this shouldn't be the only way to specify
this info. For many real FFI bindings we don't know the right search
paths, defines, libs, lib link-time search paths, lib runtime search
paths etc until we start configuring on the target system. (Though we do
almost always know statically the names of the header files). This
information is often obtained from pkg-config and other similar
foo-config programs.

For example:
$ pkg-config --cflags --libs gtk+-2.0

-I/usr/include/gtk-2.0 -I/usr/lib64/gtk-2.0/include
-I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/pango-1.0
-I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include  -lgtk-x11-2.0
-lgdk-x11-2.0 -latk-1.0 -lgdk_pixbuf-2.0 -lm -lpangocairo-1.0
-lpango-1.0 -lcairo -lgobject-2.0 -lgmodule-2.0 -ldl -lglib-2.0

This information is not universally static. It depends on the machine
we're looking at. So it can't be embedded in .hs files. Current practice
is to grok the output of pkg-config and generate the .cabal file at
configure time. Cabal then passes all this info to ghc on the command
line.
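
Groking that output amounts to simple flag tokenising; a much-simplified sketch of the sort of thing the configure step does (hypothetical names; real parsing must handle quoting and further flag forms):

```haskell
import Data.List (stripPrefix)
import Data.Maybe (mapMaybe)

data CFlags = CFlags
  { includeDirs :: [FilePath]  -- from -I flags
  , libDirs     :: [FilePath]  -- from -L flags
  , libNames    :: [String]    -- from -l flags
  } deriving (Eq, Show)

-- Split pkg-config output into the categories a build system needs.
parsePkgConfig :: String -> CFlags
parsePkgConfig out = CFlags
  { includeDirs = mapMaybe (stripPrefix "-I") ws
  , libDirs     = mapMaybe (stripPrefix "-L") ws
  , libNames    = mapMaybe (stripPrefix "-l") ws
  }
  where ws = words out
```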


So what I'm saying is that it'd be nice to standardise these simple
include pragmas, but I think it'd be more useful to think about
something to help with the bigger cases. Not that I am necessarily
suggesting we standardise the compiler command line syntax but this is
an important (for practical and portability purposes) and yet
under-specified part of the FFI spec.

Perhaps it's not a problem because we can say that Cabal just knows
the compiler-specific methods of supplying this information to each
Haskell implementation that Cabal supports. But it's certainly arguable
that this is unsatisfactory.

Duncan



Re: Parallel list comprehensions

2006-02-04 Thread Duncan Coutts
On Sat, 2006-02-04 at 15:11 +0100, John Hughes wrote:
 I noticed ticket #55--add parallel list comprehensions--which according to
 the ticket, will probably be adopted. I would argue against.

Can I second this?

The only time I ever used a parallel list comprehension was by accident.
I accidentally used '|' rather than ',' in a list comprehension and
ended up with a bug that was quite hard to track down.

Now one could argue that I could make a similar mistake with pretty much
any language feature, but it's precisely because it's a rarely used
language feature that it makes this problem worse because you're not
looking for that kind of problem.
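
For reference, the one-character difference in question (using GHC's ParallelListComp extension):

```haskell
{-# LANGUAGE ParallelListComp #-}

-- ',' nests the generators: every x is paired with every y.
nested :: [Int]
nested = [ x + y | x <- [1, 2], y <- [10, 20] ]     -- [11,21,12,22]

-- '|' runs the generators in parallel, zip-style, so a single
-- swapped character changes the meaning entirely.
parallel :: [Int]
parallel = [ x + y | x <- [1, 2] | y <- [10, 20] ]  -- [11,22]
```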

Duncan



Re: Priorities

2006-02-02 Thread Duncan Coutts
On Thu, 2006-02-02 at 11:38 +0100, John Hughes wrote:

 One such tool is wxHaskell--named by 19% of Haskell users in my survey,
 it's the de facto standard GUI toolkit. wxHaskell makes essential use of
 existential types in its interface, a strong reason for including them in
 Haskell'. It also uses multi-parameter classes and functional dependencies,
 although much less heavily.

My priorities for Gtk2Hs (the second most popular GUI toolkit in John
Hughes's survey) are very similar. We have adopted wxHaskell's style of
attributes which is what uses existential types. I should not that in
both cases the use of existential types is not essential. It was done
that way only because the symbol that people wanted to use, ':=', happens
to be a constructor operator. If '=:' were used instead then existential
constructor types would not be necessary. We might make use of MPTCs
+FunDeps if they were standard but it is not at all crucial.

Our main concern is namespace issues. As I've said before, Gtk2Hs is a
very large library but with the current module system it must export a
flat name space. wxHaskell solves this with somewhat of a hack. It
defines many multi-parameter type classes to allow the same name to be
used by different widgets (with hopefully the same general meaning). I
think this would be better solved by using qualified names.

We have been trying hard to avoid non-H98isms so that we can hope to
work with compilers other than ghc. So having these features
standardised would allow us to present a better api and allow us to
remain portable.

 What other tools must be supported by Haskell'? What other extensions
 must be present to support them? What issues remain before those
 extensions are standardized, and thus frozen in stone for many years to 
 come?

Another gripe is in the FFI, in the handling of include files. The
method preferred by GHC and the method preferred by the FFI spec are
rather at odds. GHC prefers the file to be specified on the command line
while the FFI spec prefers it to be specified in each and every FFI
import declaration. If one does the latter then GHC refuses to inline
foreign calls across modules.

Other mundane but useful things include the ability to specify different
FFI import decls for different platforms without using #ifdef's. This
would allow a program using a Gtk2Hs GUI to be compiled to bytecode with
YHC and run on different platforms, rather than having to be built
differently on each platform. (The issue is that even for portable C
libs, the calling convention and symbol names can differ across
platforms. This is true to a minor degree with the Gtk+ library.)

Duncan



Re: Existential types: want better syntactic support (autoboxing?)

2006-01-31 Thread Duncan Coutts
On Tue, 2006-01-31 at 13:59 +0100, Wolfgang Jeltsch wrote:
 On Monday, 30 January 2006 19:02, Duncan Coutts wrote:
  [...]
 
  I have often thought that it would be useful to have an existential
  corresponding to a class.
 
 How would this work with multi-parameter classes, constructor classes, etc.? 
 If you propose something that only works in conjunction with a special kind 
 of classes I would hesitate to include such thing in a Haskell standard.

As John Meacham said, it'd be for single-parameter type classes with a
parameter of kind *.

But you're probably right that people should get more experience with
using this technique before giving special support in the language to
make it convenient.

As Bulat noted we can already use this construction:

class (Monad m) => Stream m h | h -> m where
  vClose :: h -> m ()
  vIsEOF :: h -> m Bool
  ...

data Handle = forall h . (Stream IO h) => Handle h

instance Stream IO Handle where
  vClose (Handle h) = vClose h
  vIsEOF (Handle h) = vIsEOF h
  ...

But we have to give the name of the most general instance a different
name to the class which is rather inconvenient.
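
A self-contained version of the construction (simplified: a pure stream class with hypothetical names, in place of the IO-based one above):

```haskell
{-# LANGUAGE ExistentialQuantification, FlexibleInstances #-}

-- A pure stand-in for the IO-based Stream class above.
class StreamLike h where
  sRead :: h -> Maybe (Char, h)

-- The canonical instance: an existential wrapper over any stream.
-- Note it must carry a different name ("AnyStream") to the class.
data AnyStream = forall h. StreamLike h => AnyStream h

instance StreamLike AnyStream where
  sRead (AnyStream h) = fmap (fmap AnyStream) (sRead h)

-- A concrete stream: a plain String.
instance StreamLike [Char] where
  sRead []       = Nothing
  sRead (c : cs) = Just (c, cs)

-- Drain any stream through the abstract interface only.
drain :: StreamLike h => h -> String
drain h = case sRead h of
            Nothing      -> ""
            Just (c, h') -> c : drain h'
```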

So perhaps we should start with allowing a class a data type to have the
same name and in a future standard think about making it easy to define
Bulat's Handle instance above with a short hand like:

class (Monad m) => Stream m h | h -> m where
  vClose :: h -> m ()
  vIsEOF :: h -> m Bool
  ...
  deriving data Stream


I have to say though that I am surprised that us Haskell folk are not
more interested in making it easy or even possible to have abstract
values accessed via interfaces. Classes make it easy and elegant to have
type based dispatch but for the few times when value based dispatch
really is necessary it's a pain. The fact that we've suffered with a
non-extensible abstract Handle type for so long is an example of this.

Duncan
