Re: Proposal: NoImplicitPreludeImport

2013-06-04 Thread Manuel M T Chakravarty
Ian Lynagh i...@well-typed.com:
 On Tue, Jun 04, 2013 at 01:15:58PM +1000, Manuel M T Chakravarty wrote:
 
 If a module contains an import of the form
 
  import Prelude.XYZ
 
 then it also automatically uses the NoImplicitPrelude language pragma.
 Otherwise, the Prelude remains implicitly imported, as before.
 
 What about these?:
 
import Prelude.XYZ as Foo

In that case, I think, we should also have NoImplicitPrelude, but in case of

import qualified Prelude.XYZ as Foo

they might explicitly want to avoid clashes with the implicit Prelude. That
would be an argument for not implying NoImplicitPrelude in this case. On the other
hand, simpler is better; so, maybe it shouldn't depend on the way a
'Prelude.XYZ' module is imported, and we should imply NoImplicitPrelude regardless.

import Foo as Prelude.XYZ

I would say that doesn't qualify for having NoImplicitPrelude, but I don't feel 
strongly about that.

Summary
~~~

If and only if a module has at least one impdecl of the form

  'import' ['qualified'] Prelude.XYZ ['as' modid] [impspec]

then this implies {-# LANGUAGE NoImplicitPrelude #-}.

That is a simple rule with no surprises.
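To make the rule concrete (a sketch; Prelude.Minimal is a made-up module
name standing in for Prelude.XYZ):

  module Main where

  import Prelude.Minimal (IO, putStrLn)  -- hypothetical alternative Prelude

  -- Under the rule above, the impdecl alone implies
  -- {-# LANGUAGE NoImplicitPrelude #-}, so the standard Prelude is
  -- not implicitly imported into this module.
  main :: IO ()
  main = putStrLn "no implicit Prelude here"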

Manuel






Re: Proposal: NoImplicitPreludeImport

2013-06-03 Thread Manuel M T Chakravarty
I agree with Bryan. Such an invasive change should have a great payoff.

Simon Marlow (in a different discussion) proposed the following (IMO much 
better) idea: 

If a module contains an import of the form

  import Prelude.XYZ

then it also automatically uses the NoImplicitPrelude language pragma.
Otherwise, the Prelude remains implicitly imported, as before.

This simplifies using an alternative Prelude with no cost for modules that do 
not make use of that feature.

Manuel

Bryan O'Sullivan b...@serpentine.com:
 On Tue, May 28, 2013 at 8:23 AM, Ian Lynagh i...@well-typed.com wrote:
 I have made a wiki page describing a new proposal,
 NoImplicitPreludeImport, which I intend to propose for Haskell 2014:
 http://hackage.haskell.org/trac/haskell-prime/wiki/NoImplicitPreludeImport
 
 What do you think?
 
 This is a truly terrible idea.
 
 It purports to be a step towards fixing the backwards compatibility problem, 
 but of course it breaks every module ever written along the way, and it means 
 that packages that try to be compatible across multiple versions of GHC will 
 need mandatory CPP #ifdefs for years to come.
 
 The current model that we have, of opting out of the Prelude explicitly, 
 provides the same capability without damning the entire Haskell world to a 
 months-long edit-recompile cycle.
 
 Of course being able to evolve the language and its libraries is important,
 but experience from other languages (e.g. Python, Ruby) offers the lesson that
 ignoring the "if it ain't broke, don't fix it" rule is very perilous: it
 risks hobbling language growth and development for years. This proposal takes
 that rule and maximally flouts it, while offering scant payoff in return.



Re: Proposal: Scoping rule change

2012-07-25 Thread Manuel M T Chakravarty
Nitpick: your example actually does not lead to an error with GHC, because you
define, but do not use, 'foo' in M. Names (like classes) only clash when you
look them up.

Manuel

Lennart Augustsson lenn...@augustsson.net:
 It's not often that one gets the chance to change something as
 fundamental as the scoping rules of a language.  Nevertheless, I would
 like to propose a change to Haskell's scoping rules.
 
 The change is quite simple.  As it is, top level entities in a module
 are in the same scope as all imported entities.  I suggest that this
 be changed so that the entities from the module are in an inner scope
 and do not clash with imported identifiers.
 
 Why?  Consider the following snippet
 
 module M where
 import I
 foo = True
 
 Assume this compiles.  Now change the module I so it exports something
 called foo.  After this change the module M no longer compiles since
 (under the current scoping rules) the imported foo clashes with the
 foo in M.
 
 Pros: Module compilation becomes more robust under library changes.
 Fewer imports with hiding are necessary.
 
 Cons: There's the chance that you happen to define a module identifier
 with the same name as something imported.  This will typically lead to
 a type error, but there is a remote chance it could have the same
 type.
 
 Implementation status: The Mu compiler has used the scoping rule for
 several years now and it works very well in practice.
 
   -- Lennart
 




Re: Proposal: Scoping rule change

2012-07-25 Thread Manuel M T Chakravarty
If Lennart's suggestion is combined with GHC's lazy checking for name clashes 
(i.e., only check if you ever look a name up in a particular scope), it would 
also work in your example.

Manuel

Sittampalam, Ganesh ganesh.sittampa...@credit-suisse.com:
 If you’re using unqualified and unrestricted imports, there’s still the risk 
 that another module will export something you care about, e.g.
  
 module M where
 import I  -- currently exports foo
 import J  -- might be changed in future to export foo
  
 … foo …
  
 So I think you need to use import lists or qualified anyway to avoid any risk 
 of future name clashes – given that, does this change buy much?
  
 From: haskell-prime-boun...@haskell.org 
 [mailto:haskell-prime-boun...@haskell.org] On Behalf Of Lennart Augustsson
 Sent: 24 July 2012 02:29
 To: Haskell Prime
 Subject: Proposal: Scoping rule change
  
 It's not often that one gets the chance to change something as
 fundamental as the scoping rules of a language.  Nevertheless, I would
 like to propose a change to Haskell's scoping rules.
  
 The change is quite simple.  As it is, top level entities in a module
 are in the same scope as all imported entities.  I suggest that this
 be changed so that the entities from the module are in an inner scope
 and do not clash with imported identifiers.
  
 Why?  Consider the following snippet
  
 module M where
 import I
 foo = True
  
 Assume this compiles.  Now change the module I so it exports something
 called foo.  After this change the module M no longer compiles since
 (under the current scoping rules) the imported foo clashes with the
 foo in M.
  
 Pros: Module compilation becomes more robust under library changes.
 Fewer imports with hiding are necessary.
  
 Cons: There's the chance that you happen to define a module identifier
 with the same name as something imported.  This will typically lead to
 a type error, but there is a remote chance it could have the same
 type.
  
 Implementation status: The Mu compiler has used the scoping rule for
 several years now and it works very well in practice.
  
   -- Lennart
  
 



Re: Strongly Specify Alignment for FFI Allocation

2009-09-24 Thread Manuel M T Chakravarty

Thomas DuBuisson:

Aside from section 5.7 (storable) and comments on 'alignPtr', the only
mention of alignment in the FFI addendum is on
mallocBytes/allocaBytes:

The block of memory is sufficiently aligned for any of the basic
foreign types (see Section 3.2) that fits into a memory block of the
allocated size

It would be beneficial if this wording were applied to all allocation
routines - such as mallocForeignPtrBytes, mallocForeignPtrArray, etc.
For the curious, this proposal was born from the real-world issue of
pulling Word32's from a ByteString in an efficient but portable manner
(binary is portable but inefficient; a straightforward
unsafePerformIO/peek is efficient but needs alignment).
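For illustration, the kind of access in question might look as follows (a
sketch, not code from the proposal; the names are mine):

  import Data.Word (Word8, Word32)
  import Foreign.Ptr (Ptr, castPtr, plusPtr)
  import Foreign.Storable (peek)

  -- Read a Word32 at a byte offset into a buffer.  Fast, but only
  -- well-defined if 'p `plusPtr` off' satisfies Word32's alignment,
  -- which is why the allocators' alignment guarantees matter.
  peekWord32 :: Ptr Word8 -> Int -> IO Word32
  peekWord32 p off = peek (castPtr (p `plusPtr` off))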


I agree that we should be more precise here.


If no glaring issue comes up then I'll formalize / make a ticket,


Can you please summarise the exact additions that you would like to  
see as a follow-up email?  I will collect all changes that we want to  
make to the existing FFI Addendum before it goes into the 2009 issue  
of Haskell'.


Cheers,
Manuel





Re: Haskell 2010: libraries

2009-07-11 Thread Manuel M T Chakravarty

Ross Paterson:

On Wed, Jul 08, 2009 at 03:09:29PM +0100, Simon Marlow wrote:

1. Just drop the whole libraries section from the report.  The
   Report will still define the Prelude, however.

   There will be some loose ends where the rest of the report
   refers to entities from these libraries, e.g. the Prelude
   refers to Rational from the Ratio library.  We just have to
   fix up these references, moving the appropriate definitions
   into the Report as necessary.


Some of the loose ends:

The defaulting rules (section 4.3.4) apply to any class defined in the
Prelude or a standard library.  The non-Prelude classes involved are
Ix and Random.

The FFI spec refers to types Int8, Int16, Int32, Int64, Word8, Word16,
Word32, Word64, Ptr a, FunPtr a and StablePtr a.  Perhaps they should move
to the Prelude when the non-library part of the FFI spec is incorporated
into the Report?


If we have these types in the Prelude, the associated functions should  
be in the Prelude, too, and I'd be reluctant to include operations  
that are not memory-safe in the Prelude.  So, I think, we need at  
least a standard library for the FFI.  (After all, in the FFI spec we
went to a lot of trouble to realise as much of the needed functionality
as possible as libraries, so as to change the core language as little as
possible.)


I understand the desire to cut down on the number of library functions  
defined in the report, but ultimately, the language needs to provide a  
basic set of functionality that is the basis for implementing all the  
other libraries.  Otherwise, the usefulness of the standard gets  
undermined.


Apart from the Prelude, I think we should ask the following question  
to decide whether we can omit some library functionality from the  
language definition:


  If we omit the functionality under consideration,
  can we implement it in a portable manner with what remains in the  
definition?


If that is not the case, we ought to include it.

Manuel

PS: As a historical anecdote, it was a major shortcoming of Modula-2
compared to C that Modula-2 didn't define its basic libraries properly with
the language (whereas C did).



Re: Haskell 2010: libraries

2009-07-10 Thread Manuel M T Chakravarty

Simon Marlow:

On 08/07/2009 22:45, Ian Lynagh wrote:

On Wed, Jul 08, 2009 at 03:09:29PM +0100, Simon Marlow wrote:

 1. Just drop the whole libraries section from the report.  The
Report will still define the Prelude, however.

I'm tending towards (1), mainly because it provides a clean break and is
likely to be the least confusing for users: they have one place to go
looking for library documentation.


Sounds good to me.

See also http://hackage.haskell.org/trac/haskell-prime/ticket/118


Ian, would you like to take ownership for this proposal, and start  
fleshing out the details in a wiki page?


There seems to be support for removing all the libraries in the  
report.  Whether the report also blesses either the Haskell Platform  
or a set of packages is a separate matter; either way, we still have  
to extract the existing libraries from the report, and there will be  
a set of changes to the report necessary to make that happen.  The  
Report should explicitly list all the library entities that it  
refers to.


I don't mind defining libraries separately, but not defining them at
all is problematic unless a core set of libraries is rigorously
defined somewhere else.


Manuel



Re: Newtype unwrapping in the FFI

2009-02-12 Thread Manuel M T Chakravarty

Simon Peyton-Jones:
|* Clarify the spec to say that a newtype can only be automatically
|  unwrapped if the newtype constructor (MkN in this case) is in
|  scope
|
| I agree up to here. For user-defined types, not exporting the
| constructor should be a guarantee of abstraction.


I also don't really see an alternative.  The ability to pass a type to  
a foreign function tells us something about the type; there is not  
much we can do about that.
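For concreteness, the situation under discussion (my sketch; the C
function 'inc' is made up):

  {-# LANGUAGE ForeignFunctionInterface #-}
  import Foreign.C.Types (CInt)

  newtype MyInt = MkMyInt CInt

  -- Under the clarified rule, this declaration is accepted only if the
  -- constructor MkMyInt is in scope here, since the compiler must
  -- unwrap MyInt down to the basic type CInt.
  foreign import ccall "inc" c_inc :: MyInt -> MyInt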



|  It happens that a large set of types from Foreign.C.Types, such as
|  CInt and friends, are exported without their constructors, so adopting
|  this new rule would require us to change Foreign.C.Types to expose the
|  representation of CInt and friends.  (As it happens, nhc requires this
|  already, so there's some #ifdeffery there already.)
|
| The thing about CInt though is that it is supposed to be abstract *and*
| an FFI type. I want to think of it as a primitive FFI type (though it is
| not a basic type as defined by the FFI). We don't want to know that on
| some system it is Int32 and on others it is Int64. We do not want access
| to the constructor here.

Trouble is, there are a zillion types in Foreign.C.Types, and  
another zillion in Foreign.Posix.Types. Do you want to list them all  
as blessed in the FFI addendum?


The types in Foreign.C.Types are already listed in the FFI addendum  
(it actually specifies the whole module, just like the report  
specifies what is in the Prelude).


However, I think an argument can be made that hiding the
implementation of the types in Foreign.C.Types is a Bad Thing.  FFI
code is inherently architecture dependent due to the varying sizes and
representations of C types.  Hiding that variation behind a newtype
serves only to obscure that dependency; it certainly doesn't resolve
it.  I know that Section 6.2 of the FFI addendum requires these
exports to be abstract, but I now tend to think that this was a mistake.


Manuel



Re: patch applied (haskell-prime-status): add Make $ left associative, like application

2008-04-27 Thread Manuel M T Chakravarty

Sittampalam, Ganesh:

Manuel Chakravarty wrote:

We should be careful about where we break existing code, and
we should try to support automatic translation of H98 to H' code,
but any changes that we do not make now will become even more
difficult in the future when there is even more Haskell code.
Look at what is happening already: industrial users are applying
pressure on the committee not to change the language too much
for the sake of legacy code.  A clear indication that anything
we don't change now, we will have to live with forever.


I wasn't arguing for special treatment as an industrial user,
just listing one datapoint that I have to counter any impression
that the only or main cost to the community as a whole is fixing
what's on hackage.


I agree with that.  However, maybe somewhat paradoxically, I think,
given the resistance that changes to the language already provoke now,
we should actually be fairly aggressive with changes this one time
(ie, in Haskell').



Hence, anything that is *important* to change, we should change now.


Agreed. It's just in this case the pain of changing will be huge and
the benefits marginal at best.


Yes, I was not arguing for that particular change, my comment was of a  
general nature.



We should mitigate the pain by having a H98 to H' translator


Such a translator would have to maintain existing layout etc, and
produce reasonably nice looking code in places where it makes changes.
Do we have any infrastructure that would make writing one easy?


For H98, simple[1] changes might be possible with haskell-src if it
were modified to preserve comments and layout.


Manuel

[1] For example, purely syntactic ones.


Re: Meta-point: backward compatibility

2008-04-23 Thread Manuel M T Chakravarty

Simon Marlow:

Johan Tibell wrote:

An interesting question. What is the goal of Haskell'? Is it to, like
Python 3000, fix warts in the language in an (somewhat) incompatible
way or is it to just standardize current practice? I think we need
both, I just don't know which of the two Haskell' is.

[..]
As for the particular question of backwards-incompatible changes,
here are some criteria that Henrik Nilsson proposed early on, and I
think are still relevant (I'm sure he won't mind my reposting these
from the committee mailing list):


* If a proposed change breaks backwards compatibility, then it is
  acceptable only if either

  - very little existing code is likely going to be broken in
    practice, or

  - + it is widely agreed that not addressing the issue really
      would harm the long-term relevance of Haskell', and
    + it is widely agreed that attempting to maintain backwards
      compatibility would lead to an unwieldy language design, and
    + the proposed design and its implications are well understood,
      i.e. it has been implemented in at least one system and it has
      been used extensively, or a strong argument can be made on
      the grounds of, say, an underlying well-understood theory.


As I have argued before on the committee list, I also think we should  
*not* worry about backwards incompatible changes too much in cases  
where a simple automatic translation from H98 to H' code is possible.   
Even for a large project, it is IMHO no big hardship to run a H98-H'  
translator over all Haskell sources.  After all, this is only needed  
for active projects that want to make use of H'.  For old code, I  
expect that compilers will still provide a -XHaskell98 flag or similar  
for the foreseeable future.


As John Launchbury has said, given Haskell's current rise in  
popularity, anything that we do not fix with H' will be much harder,  
if not impossible, to fix in the future.


Manuel



Re: patch applied (haskell-prime-status): add Make $ left associative, like application

2008-04-23 Thread Manuel M T Chakravarty

Lennart Augustsson:
In my opinion, anyone who suggests changing the associativity of $ is
insane.  Or just hates every Haskell user.  Changing $ would make virtually
every Haskell program uncompilable.


Just pick some other (Unicode?) operator, but leave $ alone.


I agree that the power/weight ratio for changing the associativity of  
$ is not convincing at all.[1]  Like the choice of (.) for function  
composition, it is something that Haskell will have to live with,  
forever.


Nevertheless, I think Unicode for standard operators is also a no-go
for Haskell'.  We discussed this on the committee list, and really,
tool support for Unicode is still very poor.  Even cut-copy-paste of
Unicode text between different apps on MacOS (which seems to support
Unicode comparatively well) often doesn't work.  Some applications,
such as X11 for MacOS, don't seem to support Unicode at all.  The
situation on Linux is even worse.  Moreover, entering Unicode characters
is often difficult and definitely not uniform across platforms.


Manuel

[1] Don't get me wrong, I also stumbled over the associativity of $  
before and I agree that it should be different.  I just don't think  
the gain is worth the hassle.



Re: patch applied (haskell-prime-status): add Make $ left associative, like application

2008-04-23 Thread Manuel M T Chakravarty

Sittampalam, Ganesh:

Aaron Denney wrote:

On 2008-04-23, Sittampalam, Ganesh [EMAIL PROTECTED] wrote:

There's plenty of code out there that doesn't have the benefit of a
vigilant user community ready to spring into action. For example,
Credit Suisse has several tens of thousands of lines of code written
by internal users who are not Haskell experts, and it would be rather
hard to explain to them that they needed to go through it all and fix
it.

What makes them need to update to Haskell' instead of sticking with
Haskell '98?

(a) the fact that the code already uses several GHC extensions that will
    be in Haskell' and we would like to be closer to standard code
(b) the expectation that at some point implementations will stop
    supporting H98
H98


Care for legacy code is important, but H' will have to break backwards  
compatibility in some places.  And especially where you already rely  
on GHC extensions, you can't really expect that H' will adopt features  
that have been available as GHC extensions in exactly the form that  
they were implemented in GHC.


We should be careful about where we break existing code, and we should
try to support automatic translation of H98 to H' code, but any
changes that we do not make now will become even more difficult in the
future when there is even more Haskell code.  Look at what is
happening already: industrial users are applying pressure on the
committee not to change the language too much for the sake of legacy
code.  A clear indication that anything we don't change now, we will
have to live with forever.


Hence, anything that is *important* to change, we should change now.
We should mitigate the pain by having a H98 to H' translator, and
Haskell compilers will surely support a Haskell98 compatibility mode
as long as there are enough users interested in such a feature.  (This
is not unlike the transition from K&R C to ANSI C.)


Manuel



Re: patch applied (haskell-prime-status): proposal: remove string gaps

2008-04-01 Thread Manuel M T Chakravarty

John Meacham:

On Tue, Apr 01, 2008 at 09:47:30PM +0100, Neil Mitchell wrote:

   * string gaps cause problems with CPP, which doesn't like the
     backslash at the end of the line. (a minor consideration, since CPP is
     not part of the language, and in any case there is cpphs).
not part of the language, and in any case there is cpphs).


Between the two, I'd say CPP is the much uglier beast. (I tend to use m4
when I must use a preprocessor in general; it meshes with Haskell better
and is pretty ubiquitous.) In any case, the simple solution of
not using CPP in the same file as string gaps works quite well. It has
never really been an issue before.


Repeat after me: CPP is evil!

Plus, I like string gaps and have used them a lot.
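For reference, the feature in question (standard Haskell 98):

  msg :: String
  msg = "a string gap lets a long literal \
        \continue on the next line"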

Manuel



Re: Proposal for stand-alone deriving declarations?

2006-10-05 Thread Manuel M T Chakravarty
Bjorn Bringert:
 On http://hackage.haskell.org/trac/haskell-prime/wiki/DerivedInstances it says:
 
 - There is no way to derive an instance of a class for a data type  
 that is defined elsewhere (in another module).
 
 Though there is no proposal to fix this. Would such a proposal be  
 appropriate for Haskell'?

I think this would be a useful feature to have.  (I certainly wished to
have independent deriving declarations many times when writing Haskell
code.)  It also seems to be a fairly small, well understood extension.

 If so, I propose to add a top-level declaration on the form:
 
 'deriving' qtycls 'for' qtycon
 
 which produces the same instance as a deriving clause in the  
 declaration of the datatype or newtype would.

I guess, the right way to go about this would be to say that independent
deriving declarations are the fundamental way of deriving a type class.
The original form of a deriving clause at a data/newtype declaration is,
then, just a syntactic shorthand for a data/newtype declaration plus a
bunch of independent deriving declarations.
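Under that reading, the two forms below would be equivalent (a sketch of
the proposed syntax; it is not implemented anywhere yet):

  data T = A | B deriving (Eq)

  -- ... would be shorthand for ...

  data T = A | B
  deriving Eq for T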

What is not so nice is that you take a new keyword ('for'), which is
quite likely to have been used as a variable name in existing code.  (Or
does it work out to use one of the 'special' names here?)

I think it would be useful to write the proposal in complete detail up
on the Haskell' wiki.

Manuel




Re: writing / status teams - call for volunteers

2006-09-30 Thread Manuel M T Chakravarty
Andres Loeh:
 Have we already discussed how we produce new text for the report? Is this
 supposed to be all on the Wiki, or are we going to modify the TeX sources
 of the Haskell-98 report? Is the plan to keep the general style and structure
 of the Haskell-98 report, or are we going to rewrite and restructure the
 whole report?

I think for the time being, we should work on the wiki.  The actual
report will have to be written by two or three editors (doing a lot of
cut'n'paste from the wiki).  Otherwise, I doubt we'll get anything
coherent.

Manuel




Re: Re[2]: Exceptions

2006-09-06 Thread Manuel M T Chakravarty
Bulat Ziganshin wrote:
 Friday, September 1, 2006, 2:27:34 PM, you wrote:
 
  Thanks for your interest in open data types. As one of the authors of
  the open data types paper, I'd like to comment on the current
  discussion.
 
 I've not read about this yet, but maybe open types have something in
 common with the type families already implemented by Manuel Chakravarty?
 
 http://hackage.haskell.org/trac/ghc/wiki/TypeFunctions

Löh/Hinze-style open data types are orthogonal to the type indexed data
types described at the above wiki page.  Instances of type indexed data
types (or indexed type families as I am tending to call them currently)
may not overlap.  For example, while it is fine to write

data family T a :: *
data instance T Bool = TBool
data instance T Int  = TInt

the following two instances are bad:

data instance T (Int, a) = TL   -- BAD
data instance T (a, Int) = TR   -- DEFINITION

as they overlap at T (Int, Int).

In contrast, Löh/Hinze open data types are about *fully* overlapping
data declarations.  So, in their proposal (using a slightly different
syntax)

open data S :: *
S1 :: S
S2 :: S

is a perfectly fine definition.  This distinction between overlapping
and non-overlapping definitions continues on the value level (ie, with
functions operating on these data types).   Given the indexed type
family T above, I can write a function

foo :: T Bool -> ()
foo TBool = ()

but I cannot write a *toplevel* function pattern matching on more than
one data instance at once; ie, the following gives a type error:

foo :: T a -> a
foo TBool = False   -- BAD
foo TInt  = 0   -- DEFINITION

If I want to write such a function, I need to use a type class, as
follows:

class Foo a where
  foo :: T a -> a
instance Foo Bool where
  foo TBool = False
instance Foo Int where
  foo TInt = 0

Again, in contrast, Löh/Hinze open data types enable us to write

open bar :: S -> Bool
bar S1 = False
bar S2 = True

So, both features are truly orthogonal and, in fact, they are
synergetic!  More precisely, an alternative syntax for Löh/Hinze open
types is overlapping type families.  So, we might define S
alternatively as

data family S :: *
data instance S = S1
data instance S = S2

Then, one might hope we can allow overlapping indexed type families,
such as the instances for T (Int, a) and T (a, Int) above, and implement
them by a combination of the implementation of indexed data types that I
already added to GHC and Löh/Hinze's method for open data types.

NB: Curiously, the application of open data types that AFAIK got Andres
and Ralf into open data types, namely spine-view SYB
http://www.iai.uni-bonn.de/~loeh/SYB1.html, can already be implemented
with *non-overlapping* indexed type families if I am not mistaken.

 One more question I still plan to ask him: what is the
 difference between GADTs and type families?

GADTs:
  * Closed definition
  * Local type-refinement in case alternatives

Data families:
  * Open definitions (much like classes are open: you can always add
    more instances)
  * Type constraints due to indexes are propagated globally

In other words, the relationship between GADTs and data families is not
unlike that between toplevel function definitions (closed) and class
methods (open).  Moreover, you can perfectly well have indexed newtypes.
(They are already implemented and quite interesting as Haskell
guarantees that newtypes are unlifted.)
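For example (a sketch in the syntax of the wiki page cited above; the
TypeFamilies pragma is today's way to enable it):

  {-# LANGUAGE TypeFamilies #-}

  data family W a :: *
  newtype instance W Int  = WInt Int
  newtype instance W Bool = WBool Bool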

In fact, there is nothing essential preventing us from having indexed
families of GADTs - well, maybe except the occasional exploding head ;)
For example, you might define

data family T a :: *
data instance T [a] where
  IList :: Int -> T [Int]
  BList :: Bool -> T [Bool]
data instance T (Maybe a) where
  IMaybe :: Int -> T (Maybe Int)
  BMaybe :: Bool -> T (Maybe Bool)

(This definition is not supposed to make much sense, it just illustrates
the idea of an indexed GADT.)

However, I haven't fully implemented indexed GADT families yet, as I
want to finish other functionality first.  So, maybe there are problems
that I haven't stumbled over yet.

Manuel




Re: Class System current status

2006-05-15 Thread Manuel M T Chakravarty
John Hughes:
 Haskell' should define a standard language for use TODAY--and it
 should be 100% clear what that language is, with no pussy-footing
 around difficult choices. In my view it should include FDs. Then in
 the future they may be replaced--but it should then be clear that this
 IS a replacement, with no arguments of the sort "well, it's not really
 an incompatible change because FDs were only in an appendix!" Let's
 face it, people ARE going to use FDs whatever the standard says,
 because they're just so goddamn useful, and rewriting those programs if
 FDs are replaced by ATs is not going to be any easier because it's an
 appendix that's changing, rather than the main body of the report.

I agree that having FDs in the appendix does not make an essential
difference to how easy they are to replace.  Hence, I proposed a
variation on this proposal: that we actually delay issuing the
appendix.  More precisely:

* Specify MPTCs in the main language.

* Finalise Haskell' without an FD/AT appendix.

* Take our time to find out exactly how we want to do type level 
  programming (with FDs, or ATs, or both).  Once we know, we add an 
  appendix on type-level programming.

This moves the MPTC dilemma out of the critical path for Haskell' as a
whole, while avoiding having to rush the FD/AT issue.

Manuel

PS: I actually thought that this was what Simon proposed when he
originally brought up the appendix idea, which may have been only
between the class system subcommittee members.




Re: Class System current status

2006-05-13 Thread Manuel M T Chakravarty
Stephanie Weirich:
 Simon Peyton-Jones wrote:
  My suggestion is this:
 
  * Specify MPTCs in the main language
 
  * Specify FDs in an Appendix (with some reasonably conservative
  interpretation of FDs). 
 
  * A Haskell' implementation should implement the Appendix, and
  programmers can write programs against it.  But
  we are advertising specifically that we aren't sure, one way
  or the other, whether FDs will stay in the language for ever
 

 Simon,
 
 Why is an Appendix better than just a footnote in the Standard that
 says "we aren't sure, one way or the other, whether FDs will stay in the
 language for ever"?  Why do we need this extra structure?

IMHO the right thing is to decouple finalising an FD/AT appendix from
finalising the main body of Haskell'.  This is clearly more easily
realised when the delayed material is out-of-line.

Manuel




Re: termination for FDs and ATs

2006-05-08 Thread Manuel M T Chakravarty
Stefan Wehr:
 Manuel M T Chakravarty [EMAIL PROTECTED] wrote:
 
  Martin Sulzmann:
  Manuel M T Chakravarty writes:
Martin Sulzmann:
 A problem with ATs at the moment is that some terminating FD programs
 result into non-terminating AT programs.
 
 Somebody asked how to write the MonadReader class with ATs:
 
  http://www.haskell.org//pipermail/haskell-cafe/2006-February/014489.html

 This requires an AT extension which may lead to undecidable type
 inference:
 
  http://www.haskell.org//pipermail/haskell-cafe/2006-February/014609.html

The message that you are citing here has two problems:

 1. You are using non-standard instances with contexts containing
non-variable predicates.  (I am not disputing the potential
merit of these, but we don't know whether they apply to Haskell'
at this point.)
 2. You seem to use the super class implication the wrong way around
(ie, as if it were an instance implication).  See Rule (cls) of
Fig 3 of the Associated Type Synonyms paper.

  
  I'm not arguing that the conditions in the published AT papers result
  in programs for which inference is non-terminating.
  
  We're discussing here a possible AT extension for which inference
  is clearly non-terminating (unless we cut off type inference after n
  number of steps). Without these extensions you can't adequately
  encode the MonadReader class with ATs.
 
  This addresses the first point.  You didn't address the second.  Let me
  re-formulate: I think you got the derivation wrong.  You use the
  superclass implication the wrong way around.  (Or do I misunderstand?)
 
 I think the direction of the superclass rule is indeed wrong. But what about
 the following example:
 
 class C a
 class F a where type T a
 instance F [a] where type T [a] = a
 class (C (T a), F a) => D a where m :: a -> Int
 instance C a => D [a] where m _ = 42
 
 If you now try to derive D [Int], you get
 
  ||- D [Int]
 subgoal: ||- C Int        -- via Instance
 subgoal: ||- C (T [Int])  -- via Def. of T in F
 subgoal: ||- D [Int]      -- Superclass

You are using `T [a] = a' *backwards*, but the algorithm doesn't do
that.  Or am I missing something?

Manuel




Re: WordPtr,IntPtr,IntMax,WordMax

2006-05-08 Thread Manuel M T Chakravarty
John Meacham:
 On Tue, May 02, 2006 at 03:29:16AM +, Aaron Denney wrote:
  On 2006-04-29, Manuel M T Chakravarty [EMAIL PROTECTED] wrote:
   Am Donnerstag, den 06.04.2006, 16:37 -0700 schrieb John Meacham:
   On Thu, Apr 06, 2006 at 04:28:01PM -0700, John Meacham wrote:
I was curious if ghc could support the following basic types, they will
likely just be aliases of existing types.

WordPtr uintptr_t
WordMax uintmax_t
IntPtr  intptr_t
IntMax  intmax_t

all these C types are defined by ISO C so should be available,
otherwise, they are easy enough to generate in ghcs autoconf script.

jhc provides these under these names in Data.Word and Data.Int.
they would be useful for writing jhc/ghc portable low level code, and
writing 32/64 bit safe code.
   
   oh, I forgot the all important conversion routines,
   
   ptrToWordPtr :: Ptr a -> WordPtr
   wordPtrToPtr :: WordPtr -> Ptr a
   
   ptrToIntPtr :: Ptr a -> IntPtr
   intPtrToPtr :: IntPtr -> Ptr a
   
   jhc makes these available in Jhc.Addr, but if ghc decides to provide
   them in a common spot (Foreign.Ptr maybe?)
   
   then I will have jhc follow suit.
   
   I'd also propose these be added to the FFI standard.
  
   I collect additions to the FFI on the Haskell' wiki:
  
 
   http://hackage.haskell.org/trac/haskell-prime/wiki/ForeignFunctionInterface
  
   I added a note about these types.  Any other ISO C types that we should
   include?
  
  complex foo.
 
 not ISO C, but I think they would be very cool to provide
 
  http://gcc.gnu.org/onlinedocs/gcc-4.0.0/gcc/Vector-Extensions.html
 
 various vector (SSE, MMX, SIMD) types that are machine independent and
 compiled to whatever vector processing instructions the underlying arch
 supports. (pretty much all support some sort of them nowadays)

I agree that this would be cool, but I think it goes beyond Haskell'.
It is not just a matter of data types.  For it to be useful, you need to
generate the right code, too.

Manuel




Re: FFI proposal: allow some control over the scope of C header files

2006-05-08 Thread Manuel M T Chakravarty
Malcolm Wallace:
 Manuel M T Chakravarty [EMAIL PROTECTED] wrote:
 
  This leaves me with the opinion that we should really leave this as
  pragma and not make it into FFI syntax.  It's a hint to some
  implementations and irrelevant to others.
 
 Ah well, if we use that eminently sensible criterion, then the
 safe/unsafe annotation on foreign imports ought to be in a pragma too.
 For some implementations (yhc/nhc98) it is simply irrelevant, it is
 really a ghc-ism.  :-)

I think safe/unsafe is more fundamental for two reasons:

 1. nhc currently has it easier than GHC as it doesn't support
    concurrency.  Although we didn't provide explicit features
    for concurrency in the FFI addendum, we tried to co-exist.
 2. safe/unsafe is about enabling an optimisation.  Implementations
are of course free to not apply that optimisation, and then they
don't care about the annotation.  So the real question is, if
nhc would want to achieve the same level of performance as GHC,
could it still ignore the annotation?

So, I guess, I need to refine my criterion: we leave an annotation as a
pragma if it is a hint to some implementations and irrelevant to others,
which can achieve comparable levels of performance while ignoring
it.  (Strictly speaking, I guess there is still an exception if it is
generally *much* easier to achieve good performance when taking the
annotation into account.)
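For reference, the annotation in question (FFI Addendum syntax; 'sin' is
just an arbitrary example):

  {-# LANGUAGE ForeignFunctionInterface #-}
  import Foreign.C.Types (CDouble)

  -- 'unsafe' promises that the callee does not call back into Haskell,
  -- which enables a cheaper calling convention; 'safe' is the default.
  foreign import ccall unsafe "math.h sin"
    c_sin :: CDouble -> CDouble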

Manuel




Re: FFI proposal: allow some control over the scope of C header files

2006-05-08 Thread Manuel M T Chakravarty
John Meacham:
 On Mon, May 08, 2006 at 05:50:47PM -0400, Manuel M T Chakravarty wrote:
   1. nhc currently has it easier than GHC as it doesn't support
  concurrency.  Although, we didn't provide an explicit features
  for concurrency in the FFI addendum, we tried to co-exist.
 
 actually, I believe all haskell implementations already have or are
 working on concurrency. I know Einar is pretty close to adding support
 to jhc, yhc has it, and hugs has a lot of the framework done so it
 shouldn't be too hard to bring it all the way.

Excellent.

   2. safe/unsafe is about enabling an optimisation.  Implementations
  are of course free to not apply that optimisation, and then they
  don't care about the annotation.  So the real question is, if
  nhc would want to achieve the same level of performance as GHC,
  could it still ignore the annotation?
 
 Also, at some point optimization problems become correctness ones if
 they are vital for getting usable performance.

I agree.

 I am not sure if you read it, but there has been _a lot_ of discussion
 about FFI annotations in the concurrency threads. 

Yes, I saw that.

 there is a basic
 summary of our results on the Concurrency page on the wiki. The basic
 consensus is to drop the GHC-specific safe vs unsafe and annotate FFI
 calls with what your actual intent is: 'nonreentrant' if the code
 doesn't call back into Haskell, and 'concurrent' if the Haskell runtime
 needs to arrange to run concurrently with it. The exact names and
 defaults are still being worked out, but I think we have a good
 consensus on at least what different annotations we need in order to
 give compilers of all sorts of implementation models exactly what info
 they need.

That's great.  The current FFI standard stayed away from concurrency, as
there was no concurrency standard, but now that we are getting one, the
FFI has to synchronise with it.

  So, I guess, I need to refine my criterion: We leave an annotation as a
  pragma if it is a hint to some implementation and irrelevant to others
  that can ignore achieve comparable levels of performance while ignoring
  it.  (Strictly speaking, I guess there is still an exception if it is
  generally *much* easier to achieve good performance when taking the
  annotation into account.)
 
 it is fuzzy. some programs rely on NOINLINE for correctness, but of
 course they are making all sorts of assumptions about the underlying
 implementation so it isn't really portable anyway. for instance the
 NOINLINE unsafePerformIO newIORef trick for global state just doesn't
 work on jhc and it would be quite tricky to make it otherwise. Not that
 this is a new or particularly pressing issue as we will eventually hash
 everything out.

IMHO, NOINLINE unsafePerformIO newIORef is outside anything guaranteed
to work by our current standards.  Hence, programs that rely on NOINLINE
for correctness are bad programs - maybe useful, but bad!  So, jhc is
perfectly alright in that respect.  I wish we had a nicer alternative
for this dangerous idiom...
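For readers who haven't seen it, the idiom in question looks like this
(shown for concreteness only; as said above, it is outside the
guarantees of the current standards):

  import Data.IORef (IORef, newIORef)
  import System.IO.Unsafe (unsafePerformIO)

  -- Global mutable state via unsafePerformIO; the NOINLINE pragma is
  -- essential, as inlining could duplicate the IORef.
  {-# NOINLINE globalCounter #-}
  globalCounter :: IORef Int
  globalCounter = unsafePerformIO (newIORef 0)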

Manuel




Re: unsafePerformIO and cooperative concurrency

2006-04-29 Thread Manuel M T Chakravarty
Ashley Yakeley:
 Simon Marlow wrote:
  Given that unsafePerformIO is (perhaps) to be part of the Haskell'
  standard, 
 
 Is there a ticket for this? I would prefer that unsafePerformIO and 
 friends not be part of the standard.

Well, you need it for the FFI (for marshalling in pure foreign imports).
Plus, you can define your own using the FFI.
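A typical instance of that use (my sketch; strlen is observably pure,
but its argument needs marshalling, so the import itself lives in IO):

  {-# LANGUAGE ForeignFunctionInterface #-}
  import Foreign.C.String (CString, withCString)
  import Foreign.C.Types (CSize)
  import System.IO.Unsafe (unsafePerformIO)

  foreign import ccall unsafe "string.h strlen"
    c_strlen :: CString -> IO CSize

  -- The side effects are confined to marshalling, so discharging the
  -- IO with unsafePerformIO yields a pure function.
  cStringLength :: String -> Int
  cStringLength s = fromIntegral (unsafePerformIO (withCString s c_strlen))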

Manuel


 At risk of drifting off-topic, I think it would be better to find 
 examples where unsafePerformIO is needed and used safely, and come up 
 with safe functions for that (but not necessarily as part of the Prime 
 effort). IIRC the darcs code uses it a lot.
 



Re: WordPtr,IntPtr,IntMax,WordMax

2006-04-29 Thread Manuel M T Chakravarty
On Thursday, 2006-04-06, at 16:37 -0700, John Meacham wrote:
 On Thu, Apr 06, 2006 at 04:28:01PM -0700, John Meacham wrote:
  I was curious if ghc could support the following basic types, they will
  likely just be aliases of existing types.
  
  WordPtr uintptr_t
  WordMax uintmax_t
  IntPtr  intptr_t
  IntMax  intmax_t
  
  all these C types are defined by ISO C so should be available,
  otherwise, they are easy enough to generate in ghcs autoconf script.
  
  jhc provides these under these names in Data.Word and Data.Int.
  they would be useful for writing jhc/ghc portable low level code, and
  writing 32/64 bit safe code.
 
 oh, I forgot the all important conversion routines,
 
 ptrToWordPtr :: Ptr a -> WordPtr
 wordPtrToPtr :: WordPtr -> Ptr a
 
 ptrToIntPtr :: Ptr a -> IntPtr
 intPtrToPtr :: IntPtr -> Ptr a
 
 jhc makes these available in Jhc.Addr, but if ghc decides to provide
 them in a common spot (Foreign.Ptr maybe?)
 
 then I will have jhc follow suit.
 
 I'd also propose these be added to the FFI standard.

I collect additions to the FFI on the Haskell' wiki:

  http://hackage.haskell.org/trac/haskell-prime/wiki/ForeignFunctionInterface

I added a note about these types.  Any other ISO C types that we should
include?

Manuel




Re: termination for FDs and ATs

2006-04-29 Thread Manuel M T Chakravarty
Ross Paterson:
 On Thu, Apr 27, 2006 at 12:40:47PM +0800, Martin Sulzmann wrote:
  Yes, FDs and ATs have the exact same problems when it comes to termination.
  The difference is that ATs impose a dynamic check (the occurs check)
  when performing type inference (fyi, such a dynamic check is sketched
  in a TR version of the FD-CHR paper).
 
 Isn't an occurs check unsafe when the term contains functions (type
 synonyms)?  You might reject something that would have been accepted
 if the function had been reduced and discarded the problematic subterm.

We account for that, but before explaining how, please let me emphasise
that the occurs check is not something newly introduced with ATs; it's
already part of plain HM type inference - it's just that it gets more
interesting with ATs.  Hence, I don't agree with Martin's comparison to
FD type inference, where people are discussing new dynamic checks.

Now, concerning ATs, the interesting rule is (var_U) of Fig 5 of
Associated Type Synonyms http://www.cse.unsw.edu.au/~chak/papers/CKP05.html
All this rule says is that, given an equality of the form

  a = t

we cannot simplify this to the substitution [t/a] iff a \in FV(t).
However, it does *not* say that we have to reject the program at this
point.

If t does not contain any type functions, then we can indeed reject the
program, as the equality cannot be satisfied.  However, as you rightly
point out, this is not generally the case in the presence of type
functions; eg, (a = t) could be equivalent to (a = S a) and we might
have a definition for S reading

  S Int = Int

Then, if a is at some point instantiated to Int, the equality obviously
holds.  In other words, if we have (a = t) with a \in FV(t) and the
occurrence of a in t is below a type function, we don't know yet whether
or not to reject the program.
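In later type-family notation, the example reads (my sketch; the '~'
equality-constraint syntax postdates this discussion):

  {-# LANGUAGE TypeFamilies #-}

  type family S a
  type instance S Int = Int

  -- A constraint (a ~ S a) fails a naive occurs check, yet it is
  -- satisfiable: instantiating a to Int reduces both sides to Int.
  -- Hence such an equality must be deferred, not rejected outright.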

It is no problem to account for this uncertainty in AT type inference.
The crucial difference between AT type inference and inference for
vanilla type classes is that the constraint context includes equalities
in addition to class predicates; ie, whenever we come across an equality
t1 = t2 that we cannot solve yet (by unification), we put it into the
constraint context.  It may be solved later after some more
substitutions have been applied.  If not, it may end up in the signature
of whatever binding we are currently typing (during generalisation).

Manuel




Re: termination for FDs and ATs

2006-04-29 Thread Manuel M T Chakravarty
Martin Sulzmann:
 A problem with ATs at the moment is that some terminating FD programs
 result into non-terminating AT programs.
 
 Somebody asked how to write the MonadReader class with ATs:
 http://www.haskell.org//pipermail/haskell-cafe/2006-February/014489.html

 This requires an AT extension which may lead to undecidable type
 inference:
 http://www.haskell.org//pipermail/haskell-cafe/2006-February/014609.html

The message that you are citing here has two problems:

 1. You are using non-standard instances with contexts containing
non-variable predicates.  (I am not disputing the potential
merit of these, but we don't know whether they apply to Haskell'
at this point.)
 2. You seem to use the super class implication the wrong way around
(ie, as if it were an instance implication).  See Rule (cls) of
Fig 3 of the Associated Type Synonyms paper.

This plus the points that I mentioned in my previous two posts in this
thread leave me highly unconvinced of your claims comparing AT and FD
termination.

Manuel




Re: FFI proposal: allow some control over the scope of C header files

2006-04-29 Thread Manuel M T Chakravarty
John Meacham:
 It is my understanding that the FFI foreign imports declare an ABI and
 not an API, meaning the exact way to make the foreign call should be
 completely deterministic based on just what is in the Haskell file
 proper. Otherwise, obviously, direct-to-assembly implementations would
 be impossible.
 
 In this sense, include files are always potentially optional; however,
 due to the oddness of the C language, one cannot express certain calls
 without proper prototypes. Current Haskell implementations take the
 straightforward path of relying on the prototypes that are contained in
 the system headers, which also incidentally provides some safety net
 against improperly specified FFI calls. However, it would also be
 reasonable for an implementation to just generate its own prototypes, or
 use inline assembly or any other mechanism to implement the FFI ABI
 calls properly.

Exactly!  The FFI Addendum specifically leaves the compiler complete
freedom as to which method to choose.  I regard this property of the
specification as important and we should keep it for Haskell'.

Manuel




Re: Class ATs Question

2006-04-28 Thread Manuel M T Chakravarty
Ashley Yakeley:
 You can do two-way fundeps. Can these be done with associated types? For 
 instance:
 
   class HasSign u s | u -> s, s -> u where
     unsignedToSigned :: u -> s
     signedToUnsigned :: s -> u
 
instance HasSign Word8 Int8 where
  ...
 
 It might not be a great loss if not.

All FD programs that fulfil the weak coverage condition (which for all
practical purposes is as good as all FD programs) can be translated to
AT programs using a fairly simple translation scheme, as I recently
realised (but I must say that it was Martin Sulzmann's AFD paper that
inspired me):

  
http://www.cse.unsw.edu.au/~chak/haskell/BetterAssociatedTypes_2fClassEqualities.html

Does anybody have a similarly simple and comprehensive translation of FDs to 
ATs?
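For the HasSign example above, one direction of the encoding could look
like this (my sketch in GHC-style associated-type syntax; it covers only
the u -> s direction of the original two-way fundep):

  {-# LANGUAGE TypeFamilies #-}
  import Data.Word (Word8)
  import Data.Int (Int8)

  class HasSign u where
    type Signed u
    unsignedToSigned :: u -> Signed u
    signedToUnsigned :: Signed u -> u

  instance HasSign Word8 where
    type Signed Word8 = Int8
    unsignedToSigned = fromIntegral
    signedToUnsigned = fromIntegral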

Manuel




Re: FFI proposal: allow some control over the scope of C header files

2006-04-23 Thread Manuel M T Chakravarty
Duncan Coutts:
 On Fri, 2006-04-21 at 09:32 -0400, Manuel M T Chakravarty wrote:
 
   I think we'd want to be able to specify that a C header file not
   escape a module boundary and probably we'd also want to be able to ask
   that it not escape a package boundary (though this may be beyond the H'
   spec since Haskell does not talk about packages).
  
  The H98 standard already specifies a NOINLINE pragma for any function:
  
http://haskell.org/onlinereport/pragmas.html
  
  The simplest solution is to ensure that all Haskell compilers implement
  this pragma properly for foreign imported functions.  If you want finer
  control over where inlining takes place, then maybe the pragma should be
  extended to provide that finer control.
 
 I don't think we need to generalise the problem to all function
 inlinings. There are specific practical problems caused by inlining
 foreign calls that are not a problem for ordinary Haskell functions.

Inlining of foreign functions causes extra problems, but generally
inlining is a concern; so, if we can use the same mechanisms, we get a
simpler language.

  Besides, the standard so far doesn't cover command line options at all.
  So, there is the more general question of whether it should.
 
 I don't think we need to specify the command line interface. The
 required headers can be put in the module.

That's ok with me.  I was just pointing out that many of the problems
and much of the lack of understanding that we are seeing have to do with
the use of command line options.  We simply cannot address this unless
the standard covers command line options.

   So some syntax off the top of my head:
   
   foreign import cheader module-local foo/bar.h
   
   I think there are 3 possibilities for the C header escape/scope setting
   (which should probably be manditory rather than optional):
   module-local
   package-local (extension for compilers that have a notion of a package)
   global
  
  Is this additional complexity really necessary or would the use of
  NOINLINE pragmas not suffice?  It's really in a library context where
  you want to restrict the inlining of foreign functions, but there the
  foreign functions are probably not much used inside the library itself,
  but mainly exported, so I doubt that you would get much of a performance
  loss by just tagging all foreign imported functions that you don't want
  to escape as NOINLINE.
 
 What I really want is for the issue of header scope to be something that
 can be checked by the compiler. As a distro packager I see far too many
 people getting it wrong because they don't understand the issue. If we
 could declare the intended scope of the header files then 1. people
 would think about it and 2. if they got it wrong it'd be checkable because
 the compiler would complain.

Whether or not the compiler can check for wrong use seems to me
independent of whether we use inline pragmas or any other syntax.  GHC
could very well check some of these things today.  It just doesn't.  Do
you propose to make such checks mandatory in the standard?

 As it is at the moment people don't know they're doing anything dodgy
 until some user of their package gets a mysterious gcc warning and
 possibly a segfault.
 
 If we just tell everyone that they should use NOINLINE then they won't
 and they'll still get it wrong.
 
 The reason for some specific syntax rather than using NOINLINE is that
 the compiler will be able to track the header files needed by each
 module. So we can avoid the situation where a call gets made outside the
 scope of its defining header file - either by automatically #including
 the header file in the right place, or by complaining if the user does
 not supply the header (eg by putting it in the .cabal file).
 
 So it's not the general issue of inlining but the specific problem of
 what C header files are required to compile what modules.
 
 The ideal situation I imagine is that the scope of the headers can be
 checked automatically so that the compiler or cabal will complain to a
 library author that their private header file needs to be marked as
 local to the package/module or included in the library package file and
 installed with the package.

There are two issues here:

(1) Specification of which functions need what headers and whether 
these functions can be inlined.
(2) Let the compiler spot wrong uses of header files.

These two issues are largely independent.  Re (1), I dislike new syntax
(or generally any additions to the language) and prefer using existing
mechanisms as far as possible.  The reason is simply that Haskell is
already very complicated.  Haskell' will be even more complicated.
Hence, we must avoid any unnecessary additions.

Re (2), I am happy to discuss what kind of checks are possible, but I am
worried that it'll be hard to check for everything without assistance
from cabal, which I don't think will be part of Haskell'.

Re the concern about wrong use: FFI programming

Re: FFI proposal: allow some control over the scope of C header files

2006-04-21 Thread Manuel M T Chakravarty
Duncan Coutts:
 One problem that people writing FFI bindings often run into is that they
 do not understand exactly where C header files are required to be
 available.
 
 The easy case is importing some C function defined in a well known and
 widely available C header file (eg gtk/gtk.h). In this case we just make
 sure that header is available for compiling every module in the package
 and add that header file to the package info file (or .cabal file) so
 that every module that uses the package will have the C header
 available. In this case there is no problem with C calls being inlined
 outside of the module which imported them since the C header file will
 be available everywhere.
 
 The tricky case is that people often use private header files that are
 #included when compiling a module/package but are not installed along
 with that package and so are not #included when compiling client
 modules. Most of the time this works, however the Haskell compiler is
 allowed to inline across modules and if it chooses to inline the C call
 into a client module then things will break. Sadly it still compiles and
 sometimes even works since C allows calling a C function without a
 prototype. However occasionally it's going to break horribly.

 Allowing us to limit where the C headers will be required would be very
 useful. Sometimes it is very convenient to have private header files
 that will not be installed with the package. It is also sometimes the
 case that it's much more convenient to not require that the user has a
 set of C header files installed to be able to use a library package.
 Examples of this include some Windows packages, eg DirectX where it's
 rather inconvenient to require that users have the MS DirectX SDK
 installed.

I understand these concerns, but they are tightly coupled to two
mechanisms that are currently not really standardised: (1) cross-module
function inlining and (2) command line options.

 Currently GHC has a de-facto way of limiting the required scope of C
 header files to a module - by using the standard FFI syntax (!). I know
 people are already using this trick to allow the use of private header
 files.
 
 This issue also touches on the related issue that the way of specifying
 C header files in the FFI spec is not really optimal. GHC implements a
couple of other methods and these are probably used more than the method in
 the FFI spec.
 
 So I suggest that we briefly consider some possibilities for extending
 control over where C header files will be needed and perhaps also for
 specifying what C header files are needed in the first place.
 
 I think we'd want to be able to specify that a C header file not
 escape a module boundary and probably we'd also want to be able to ask
 that it not escape a package boundary (though this may be beyond the H'
 spec since Haskell does not talk about packages).

The H98 standard already specifies a NOINLINE pragma for any function:

  http://haskell.org/onlinereport/pragmas.html

The simplest solution is to ensure that all Haskell compilers implement
this pragma properly for foreign imported functions.  If you want finer
control over where inlining takes place, then maybe the pragma should be
extended to provide that finer control.
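
To make this concrete, here is a rough sketch of the pattern in
question (the header and function names are made up, and whether a
given compiler honours the pragma for foreign imports is exactly what
would have to be guaranteed):

  import Foreign.C.Types (CInt)

  -- binding against a private header; the NOINLINE pragma is meant to
  -- keep the call from being inlined into modules where frob.h is not
  -- available
  {-# NOINLINE cFrob #-}
  foreign import ccall "frob.h c_frob"
    cFrob :: CInt -> IO CInt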

 It would also be convenient to be able to specify that a module needs a
 particular C header file rather than having to specify it in each
 foreign import decl. Currently this can be done by cabal in a
 compiler-specific way (it uses ghc's -#include command line mechanism)

If you don't specify it in every import declaration, the compiler won't
know what to include if you allow inlining and the compiler does perform
cross-module inlining.

Besides, the standard so far doesn't cover command line options at all.
So, there is the more general question of whether it should.

 It's a reasonable question to ask if specifying a C header file should
 go in the module source code or elsewhere (eg a .cabal file) since
 after all we don't specify search paths etc in the module. I'd say that
 it is right that the name of the header file be in the module source
 code and that the search paths etc be external.
 
 So some syntax off the top of my head:
 
 foreign import cheader module-local foo/bar.h
 
 I think there are 3 possibilities for the C header escape/scope setting
 (which should probably be mandatory rather than optional):
 module-local
 package-local (extension for compilers that have a notion of a package)
 global

Is this additional complexity really necessary or would the use of
NOINLINE pragmas not suffice?  It's really in a library context where
you want to restrict the inlining of foreign functions, but there the
foreign functions are probably not much used inside the library itself,
but mainly exported, so I doubt that you would get much of a performance
loss by just tagging all foreign imported functions that you don't want
to escape as NOINLINE.

Manuel



Re: Concurrency, FFI status

2006-04-21 Thread Manuel M T Chakravarty
Simon Marlow:
 I have now summarised the concurrency proposal status, here:
 
  
 http://hackage.haskell.org/cgi-bin/haskell-prime/trac.cgi/wiki/Concurrency
 
 I have tried to summarise the various points that have arisen during the
 discussion.  If anyone feels they have been mis-paraphrased, or I have
 forgotten something, please feel free to edit, or send me some text for
 inclusion.  I don't want to include long gobs of text in here, though:
 just summarise the main points, and if necessary link to relevant
 mailing list posts.

Good summary.

Concerning the issue of preemptive versus cooperative concurrency, I
still think cooperative concurrency is pretty useless.  Is there any
non-toy application that actually uses Hugs' current cooperative
concurrency?

Concerning the trouble of Hugs and Jhc to implement preemptive
concurrency, IMHO that's a significant design flaw in these
implementations.  Preemptive concurrency is important for many
applications and, if anything, will become more important with new
architectures.  Fundamental limits on being able to support this,
fundamentally limit the application space.  I'd rather not see that
design flaw being transferred from these implementations to the language
standard.

Manuel




RE: MPTCs and functional dependencies

2006-04-09 Thread Manuel M T Chakravarty
Simon Peyton-Jones: 
 My current take, FWIW.
 
 * MPTCs are very useful.  They came along very rapidly (well before
 H98).  I think we must put them in H'
 
 * But MPTCs are hamstrung without FDs or ATs
 
 * FDs and ATs are of the same order of technical difficulty, as Martin
 says

Both FDs and ATs come in various versions of different levels of
expressiveness.  They may be of the same level of difficulty with all
bells and whistles, but that's a bit of a red herring.  The real
question is how do the levels of difficulty compare at the level of
expressiveness that we want.  Or phrased differently, for a version that
is not too difficult, how does the expressiveness compare.

 * ATs are (I believe) a bit weaker from the expressiveness point of view
 (zip example), but are (I believe) nicer to program with.  

The zip example can be done with ATs - it's actually in one of Martin's
papers.  I currently don't know of any FD example that you can't do with
ATs.  It's a matter of what forms of AT definitions you want to allow.

 * BUT we have way more experience with actually programming with FDs.
 ATs fail the well-established test by a mile.

Indeed!

 * Largely due to Martin's work, we now have a much better handle on just
 what restrictions on FDs make type inference tractable.  So I believe
 there is a solid MPTC+FD story that we could embody in H'.
 
 * Medium term, I think ATs may *at the programming-language level*
 displace FDs, because they are nicer to program with.  But that's just
 my opinion, and we don't have enough experience to know one way or the
 other.

Maybe not only at the programming-language level.  Given our latest paper,

  http://www.cse.unsw.edu.au/~chak/papers/SCP06.html

for example, the translation of ATs is simpler than that of FDs if we
also have existential types (but admittedly that became clear to us only after
your email message).

 Tentative conclusion: H' should have MPTC + FDs, but not ATs.

My conclusion is that we should not include FDs or ATs into the standard
at the moment.  Standardising FDs as a stopgap measure may easily put us
into the same situation that we are having with records at the moment.
Nobody is really happy with it, but we don't dare to change it either.

Manuel




Re: Concurrency (was: RE: Re[2]: important news: refocusing discussion)

2006-03-29 Thread Manuel M T Chakravarty
John Meacham:
 On Wed, Mar 29, 2006 at 11:56:41AM +0100, Simon Marlow wrote:
  Fair enough - I take that as a vote for a concurrency addendum.
 
 Actually, I think there is a lot we can standardize in a portable way
 when it comes to concurrency without compromising the ability for any
 compiler to implement it, and I think it would be very worthwhile to do
 so in the report proper.
 
  
  I think it's a bit unfair to talk about GHC-style concurrency.  There
  are many different ways to implement exactly what GHC currently
  provides.  In fact, we were very careful when designing it to ensure
  that this was the case:
 
 yeah, when I say GHC-style concurrency, I mean the interface that GHC
 has: forkIO, MVar, etc., as opposed to event-loop, O'Haskell, explicitly
 scheduled, manual continuations, etc.

As I see it, it's really only GHC's API which is up for discussion for
inclusion in Haskell', as we decided that we largely want to go with
already implemented and tested approaches.

Manuel




RE: Re[2]: important news: refocusing discussion

2006-03-28 Thread Manuel M T Chakravarty
Simon Marlow:
 On 26 March 2006 03:44, Ross Paterson wrote:
 
  On Sat, Mar 25, 2006 at 05:31:04PM -0800, isaac jones wrote:
  I have no idea if it would work, but one solution that Simon didn't
  mention in his enumeration (below) is that we could find a group of
  people willing to work hard to implement concurrency in Hugs, for
  example, under Ross's direction.
  
  I'm no expert on Hugs internals, and certainly not qualified to direct
  such an effort, but I don't have great hopes for it.  Apart from the
  fact that Hugs is written in a legacy language and uses a quite a bit
  of global state, it also makes heavy use of the C stack, and any
  implementation that does that will have trouble, I think.
 
 Yes, I don't see an easy way to do it.  You could have one OS thread per
 Haskell thread (let the OS manage the separate C stacks), a giant lock
 around the interpreter (to protect all the global state), and explicit
 yield() from time to time to simulate pre-emption.  This isn't too bad,
 but you still have to implement GC somehow, and hence traverse all the
 live C stacks, and that sounds tricky to me.

True, but so what?  I mean, honestly, we should decide language features
by their merit to applications and maturity.  We should also take into
account what the power/weight ratio of a feature is in terms of general
implementation costs.  But discussing the costs to one particular
implementation that's already been stretched light years beyond what it
was originally intended for seems a bit much.

Manuel




RE: important news: refocusing discussion

2006-03-24 Thread Manuel M T Chakravarty
Simon Marlow:
 On 24 March 2006 12:28, Ross Paterson wrote:
 
  On Fri, Mar 24, 2006 at 11:30:57AM -, Simon Marlow wrote:
  So I believe the issue is mainly one of perspective.  Until I wrote
  this email I hadn't thought of (4) and my preference was for (2),
  but now I quite like the idea of (4).  We would include concurrency
  in Haskell', but provide a separate addendum that specifies how
  implementations that don't provide concurrency should behave.  One
  advantage of (4) over (3) is that we can unambiguously claim that
  Haskell' has concurrency.
  
  And we can unambiguously state that there is only one Haskell'
  implementation (though a second is on the way).
  
  Sure, concurrency is essential to many applications, and should be
  precisely specified.  But it is also irrelevant to a lot of uses of
  Haskell (except for ensuring that one's libraries are also usable on
  concurrent implementations, as JohnM said).  A specification of the
  language without concurrency would be at least as valuable (having
  more implementations).  Perspective, as you say -- most people agree
  we need both -- but I think you're a bit too negative about the
  smaller variant. 
 
 This is just a difference of opinion, and probably won't be easily
 resolved.  It comes down to whether you think Haskell' should be a
 language that is wide enough to include such applications as a web
 server, or whether it has to stop short of including concurrency because
 it's too hard to implement (and it's not always hard - the YHC guys
 managed it in a matter of days, but I do realise it would be hard in
 Hugs).
 
 I think it would be a mistake to relegate concurrency to an addendum; it
 is a central feature of the language, and in fact is one area where
 Haskell (strictly speaking GHC) is really beginning to demonstrate
 significant advantages over other languages.  We should make the most of
 it.

I 100% agree!!  Personally, I think, after the FFI, a good story about
concurrency and exceptions is what H98 misses most for applications
other than variations on the compiler theme.

Manuel




Re: seq as a class method

2006-03-24 Thread Manuel M T Chakravarty
John Hughes:
 Wolfgang Jeltsch:
 it seems that there is not yet a ticket about putting seq into a type class 
 (again).

And I hope it stays that way.

 This sounds like a good idea in principle, but it was a nightmare in 
 practice.
 
 First, the implementation details and the difference between _|_ and 
 const _|_
 make a difference to space behaviour, and one needs a way to control that.
 Hiding the differences can make space leaks *impossible* to fix.

Along similar lines: I like Haskell being lazy, but it has to make it
easier for the programmer to enforce eager evaluation where necessary
for good resource utilisation.  `seq' is already annoying and
inconvenient (as it forces you to re-arrange your code); let's not make
it worse.  I'd like Haskell' to make it easier to force evaluation,
which is why I like the bang pattern proposal.
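
As a small illustration of the re-arrangement that `seq' forces (my
sketch, written in the bang-pattern syntax of the proposal, spelled as
GHC accepts it today):

  {-# LANGUAGE BangPatterns #-}

  -- with seq, forcing the accumulators means naming and threading them:
  mean :: [Double] -> Double
  mean = go 0 (0 :: Int)
    where
      go s n []     = s / fromIntegral n
      go s n (x:xs) = let s' = s + x
                          n' = n + 1
                      in s' `seq` n' `seq` go s' n' xs

  -- with bang patterns, the annotations sit directly on the binders:
  mean' :: [Double] -> Double
  mean' = go 0 (0 :: Int)
    where
      go !s !n []     = s / fromIntegral n
      go !s !n (x:xs) = go (s + x) (n + 1) xs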

Manuel




Re: Re[2]: Strict tuples

2006-03-22 Thread Manuel M T Chakravarty
Taral:
 On 3/22/06, Bulat Ziganshin [EMAIL PROTECTED] wrote:
  GHC uses unboxed tuples for just this sort of optimization. Instead
  of returning a possibly-unevaluated pair with possibly-unevaluated
  elements it just returns, say, two doubles in registers - a huge win
 
 I have no doubt of this. My comment refers to the idea that somehow
 such strictness annotations are (a) required at the type level and (b)
 required at all to enable such optimization. I believe the
 optimization happens without any annotation from the user, and it
 should stay that way.

It does happen...sometimes!  The trouble is that for certain types of
programs (eg, numerically intensive ones), you absolutely need that
optimisation to happen.  Without strict tuples, this means you have to
dump the intermediate code of the compiler and inspect it by hand to see
whether the optimisation happens.  If not, you have to tweak the source
to nudge the compiler into recognising that it can optimise.  Of course,
all your efforts may be wasted when the next version of the compiler is
released or when you have to change your code.

Manuel




RE: important news: refocusing discussion

2006-03-22 Thread Manuel M T Chakravarty
Simon Marlow:
 On 21 March 2006 23:51, isaac jones wrote:
 
  Concurrency is summarized here:
 
 http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/Concurrency
 
 I have updated the concurrency page with a skeleton proposal.

Yes, good plan.

Manuel




Re: Strict tuples

2006-03-21 Thread Manuel M T Chakravarty
John Meacham:
 On Mon, Mar 20, 2006 at 09:39:41AM -0500, Manuel M T Chakravarty wrote:
  Apart from the syntactic issues, does anybody else support the idea of
  strict tuples as proposed?  I just want to know whether I am alone on
  this before putting it on the wiki.
 
 I have a few issues though, not entirely easy to articulate.
 
 I worry about all the (! .. !) types that will appear in interfaces,
 making things like (map fst) not work. It has been my experience that a
 lot of things that should be strict and are obvious to the user are
 often obvious to the compiler as well. Having the user place redundant
 strictness annotations in can obfuscate where the actual performance
 fixes are. As in, are lazy tuples actually a source of problems or are
 we just guessing? GHC's strictness analyzer is pretty darn good. If
 something is subtle enough for the compiler not to catch it, then the
 programmer probably won't catch it right off the bat either. It usually
 takes profiling to determine where the human-fixable problems are.

I agree that strict tuples can be abused, but that's true for most
language features.

 Strictness does not belong in the type system in general. Strictness
 annotations are attached to the data components and not type components
 in data declarations because they only affect the desugaring of the
 constructor, but not the run-time representation or the types in
 general. Attaching strictness info to types is just the wrong thing to
 do in general, I think.

I am *not* proposing any addition or change to the type system.  In H98,
I can define

  data Pair a b = Pair a b
  data StrictPair a b = StrictPair !a !b

For some reason, we have Pair with special syntax pre-defined, but we
haven't got StrictPair pre-defined.  All I am proposing is to also
pre-define StrictPair.

 However, strict tuples I think would have use in function returns;
 no need to declare them as a separate type, just have
 
 (! a,b !) desugar exactly to a `seq` b `seq` (a,b)
 
 This avoids any type issues, and since the only time the strictness of
 a constructor comes into play is in the constructor desugaring anyway,
 it makes sense that strict tuples would be a simple desugaring to
 normal tuples as well.

The disadvantage of this scheme is that the consumer of a strict tuple
then has no knowledge of the fact that the components are already
evaluated - ie, it wastes a good opportunity for optimisations.
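
Compare with what strict constructor fields already give us today (a
sketch of mine): after matching on a constructor with strict fields,
the consumer may rely on the components being in WHNF, which a mere
seq-based desugaring cannot promise.

  data StrictPair a b = StrictPair !a !b

  -- x and y are known to be evaluated after the match, so the compiler
  -- need not generate code to re-enter potential thunks:
  sumSP :: StrictPair Int Int -> Int
  sumSP (StrictPair x y) = x + y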

Manuel




RE: Strict tuples

2006-03-20 Thread Manuel M T Chakravarty
Simon Marlow:
 Not to mention overlap with sections:  (!i).  Even with just bang
 patterns, we have some interesting parsing problems due to the overlap
 with infix '!'.  eg., now 
 
   arr ! x = indexArray arr x
 
 will probably parse as
 
   arr (!x) = indexArray arr x
 
 which means that in order to define (!) you have to use the prefix form:
 (!) arr x = ...
 
 GHC's implementation of bang pattern parsing has some ugliness to deal
 with this.  In the report, we will have to be very careful to make sure
 the syntax doesn't have any ambiguities in this area, which will
 probably mean adding special cases to the grammar.
 
 My suggestion is to avoid these problems by removing infix '!' from the
 syntax:
 
 http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/ArrayIndexing
 
 I realise this is a code-breaking change, but I consider the special
 cases introduced to the syntax by bang patterns to be rather warty.
 Also, since I think many of us envisage Haskell moving towards having
 more strictness annotations in the future, it makes sense to
 consistently use the '!' operator to mean strict.

I agree that the use of ! for indexing is a bad choice, actually a very
bad choice.  As arrays are not used that much and (!) isn't even
exported from the Prelude, I like the idea of changing the indexing
syntax.  I am less convinced that it is wise to change the syntax of
function composition, as this will break a huge set of programs.  I
actually also don't see that this affects the array proposal.  (.#)
would be a valid and free operator anyway, wouldn't it?  What about list
indexing? Use (.##)?  (Doesn't look very nice, but transfers the (!) for
arrays and (!!) for lists idea.)  A change to list indexing will
probably break more programs than a change to array indexing.
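
For illustration, a sketch of mine of the workaround Simon mentions,
using list indexing for concreteness: with bang patterns enabled, it is
the infix definition that is ambiguous, while the prefix form parses
unproblematically.

  infixl 9 !

  -- an infix definition like  xs ! n = ...  risks being parsed as the
  -- bang pattern  xs (!n) = ...; the prefix form avoids that:
  (!) :: [a] -> Int -> a
  (!) xs n = xs !! n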

Apart from the syntactic issues, does anybody else support the idea of
strict tuples as proposed?  I just want to know whether I am alone on
this before putting it on the wiki.

Manuel

 On 19 March 2006 02:35, Manuel M T Chakravarty wrote:
  Loosely related to Ticket #76 (Bang Patterns) is the question of
  whether we want the language to include strict tuples.  It is related
  to bang patterns, because its sole motivation is to simplify enforcing
  strictness for some computations.  Its about empowering the programmer
  to choose between laziness and strictness where they deem that
  necessary without forcing them to completely re-arrange
  sub-expressions (as seq does).
  
  So what are strict tuples?  If a lazy pair is defined in pseudo code
  as 
  
data (a, b) = (a, b)
  
  a strict pair would be defined as
  
data (!a, b!) = ( !a, !b )
  
  Ie, a strict tuple is enclosed by bang parentheses (! ... !).  The use
  of the ! on the rhs are just the already standard strict data type
  fields.
  
  Why strict tuples, but not strict lists and strict Maybe and so on?
  Tuples are the Haskell choice of returning more than one result from a
  function.  So, if I write
  
add x y = x + y
  
  the caller gets an evaluated result.  However, if I write
  
addmul x y = (x + y, x * y)
  
  the caller gets a pair of two unevaluated results.  Even with bang
  patterns, I still have to write
  
addmul x y = let !s = x + y; !p = x * y in (s, p)
  
  to have both results evaluated.  With strict tuples
  
addmul x y = (!x + y, x * y!)
  
  suffices.
  
  Of course, the caller could invoke addmul using a bang pattern, as in
  
let ( !s, !p ) = addmul x y
in ...
  
  but that's quite different to statically knowing (from the type) that
  the two results of addmul will already be evaluated.  The latter
  leaves room for more optimisations.
  
  Syntax issues
  ~
  * In Haskell (,) is the pair constructor.  What should we use for
    strict tuples?  (!,!) ?
  * With strict tuples (! and !) would become some sort of
    reserved/special symbol.  That interferes with bang patterns, as
    (!x, y!) would be tokenized as (! x , y !).  We could use ( ... !)
    for strict tuples to avoid that conflict, or just require that the
    user write ( !x, !y ) when they want a bang pattern.  (Just like you
    cannot write `Just.x' to mean `Just . x' as the former will always
    be read as a qualified name and not the application of function
    composition.)
 



Re: the MPTC Dilemma (please solve)

2006-03-20 Thread Manuel M T Chakravarty
Claus Reinke:
  In fact, it's quite worrying that FDs have been around for so long and
  still resisted a thorough understanding.
 
 they don't resist. But as long as progress is example-driven and scary
 stories about FDs supposedly being tricky and inherently
 non-understandable are more popular than investigations of the issues,
 there won't be much progress. Please don't contribute to that hype.

When I say hard to understand, I mean difficult to formalise.  Maybe I
have missed something, but AFAIK the recent Sulzmann et al. paper is the
first to thoroughly investigate this.  And even this paper doesn't
really capture the interaction with all features of H98.  For example,
AFAIK the CHR formalisation doesn't consider higher kinds (ie, no
constructor classes).

 it is okay to advertise for your favourite features. In fact, I might
 agree with you that a functional type-class replacement would be
 more consistent, and would be a sensible aim for the future.
 
 but current Haskell has type classes, and current practice does use
 MPTCs and FDs; and you don't do your own case any favours by
 trying to argue against others advertising and investigating theirs.

I don't care whether I do my case a favour.  I am not a politician.
There is only one reason that ATs exist: FDs have serious problems.

Two serious problems have little to do with type theory.  They are more
like software engineering problems:

 I. One is nicely documented in

http://www.osl.iu.edu/publications/prints/2003/comparing_generic_programming03.pdf
II. The other one is that if you use FDs to define type-indexed
     types, you cannot make these abstract (ie, the representations
     leak into user code).  For details, please see the "Lack of
     abstraction" subsubsection in Section 5 of
     http://www.cse.unsw.edu.au/~chak/papers/#assoc
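
To give the flavour of point II, here is a small sketch of mine (not
the paper's example), in the style of the usual type-indexed finite
map; the extension names are today's GHC spellings:

  {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
               FlexibleInstances #-}

  import qualified Data.IntMap as IM

  class Key k fm | k -> fm, fm -> k where
    empty  :: fm v
    insert :: k -> v -> fm v -> fm v

  instance Key Int IM.IntMap where
    empty  = IM.empty
    insert = IM.insert

  -- the representation leaks: this user-level binding cannot be given
  -- an abstract type, its signature names IM.IntMap directly
  one :: IM.IntMap String
  one = insert (1 :: Int) "one" empty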

 you reply to a message that is about a month old.

That's what relocating halfway around the planet does to your email
responsiveness...but the topic is still of interest and I got the
impression that your position is still the same.

 for instance, ATs should just be special syntax for a limited, but
 possibly sufficient and more tractable form of MPTCs/FDs, and 
 as long as that isn't the case in practice because of limitations in
 current implementations or theory, we don't understand either 
 feature set sufficiently well to make any decisions.

ATs are not about special syntax.  Type checking with ATs does not use
improvement, but rather a rewrite system on type terms interleaved
with unification.  This leads to similar effects, but seems to have
slightly different properties.

Actually, I think, much of our disagreement is due to a different idea
of the purpose of a language standard.  You appear to be happy to
standardise a feature that may be replaced in the next standard.  (At
least, both of the choices you propose in your email include
deprecating a feature in Haskell''.)  I don't see that as an acceptable
solution.  A standard is about something that stays.  That people can
rely on.  IMHO if we consider deprecating a feature in Haskell'' again,
we should not include it in Haskell', but leave it as an optional extra
that some systems may experimentally implement and some may not.

Manuel




Re: Re[2]: the MPTC Dilemma (please solve)

2006-03-20 Thread Manuel M T Chakravarty
Ross Paterson:
 On Sun, Mar 19, 2006 at 11:25:44AM -0500, Manuel M T Chakravarty wrote:
  My statement remains:  Why use a relational notation if you can have a
  functional one?
 
 I agree that functions on static data are more attractive than logic
 programming at the type level.  But with associated type synonyms,
 the type level is not a functional language but a functional-logic one.

You are right, of course.

<Hand-waving-alert>
However, the evaluation model is what is known as residuation
in the FL community, which is essentially functional programming
with logic variables and lenient evaluation (a la Id).  As long
as we only have strongly normalising functions, lenient
evaluation and lazy evaluation coincide.  So, for Haskell
programmers, we are on familiar ground.
</Hand-waving-alert>

Manuel




Re: Strict tuples

2006-03-20 Thread Manuel M T Chakravarty
Sebastian Sylvan:
 On 3/19/06, Manuel M T Chakravarty [EMAIL PROTECTED] wrote:
  Loosely related to Ticket #76 (Bang Patterns) is the question of whether
  we want the language to include strict tuples.  It is related to bang
  patterns, because its sole motivation is to simplify enforcing
  strictness for some computations.  It's about empowering the programmer
  to choose between laziness and strictness where they deem that necessary
  without forcing them to completely re-arrange sub-expressions (as seq
  does).
 
  So what are strict tuples?  If a lazy pair is defined in pseudo code as
 
data (a, b) = (a, b)
 
  a strict pair would be defined as
 
data (!a, b!) = ( !a, !b )
 
  Ie, a strict tuple is enclosed by bang parentheses (! ... !).  The use
  of the ! on the rhs are just the already standard strict data type
  fields.
 
 
 Maybe I've missed something here. But are there really any reasonable
 use cases for something like:
 
 f !(a,b) = a + b
 
 in the current bang patterns proposal?
 
 I mean, would anyone really ever want an explicitly strict (i.e. using
 extra syntax) tuple with lazy elements?
 
 Couldn't the syntax for strict tuples be just what I wrote above
 (instead of adding weird-looking exclamation parentheses)?
 
 I'm pretty sure that most programmers who would write f !(a,b) = ...
 would expect the tuple's elements to be forced (they wouldn't expect
 it to do nothing, at least). In fact !(x:xs) should mean (intuitively
 to me, at least) force x, and xs, meaning that the element x is
 forced, and the list xs is forced (but not the elements of xs).
 
 Couldn't this be generalised? A pattern match on any constructor with
 a bang in front of it will force all the parts of the constructor
 (with seq)?

The point about strict tuples is not that the components are forced on
pattern matching (that's indeed what bang patterns are for).  The point
about strict tuples is that the components are forced *before* the tuple
is *constructed*.  It's really exactly the same as with strict fields in
data type declarations today.  So, yes, I can just define my own

  data MyStrictPair a b = MyStrictPair !a !b

and use that.  My point is simply that strict tuples are a particularly
useful form of strict data types, so

  * they should be pre-defined in the Prelude and
  * they should inherit the special syntax of tuples.

So, this is not so much a language feature as a library issue.

Manuel




Re: Re[2]: the MPTC Dilemma (please solve)

2006-03-19 Thread Manuel M T Chakravarty
Bulat Ziganshin:
 Hello Lennart,
 
 Sunday, March 19, 2006, 4:05:03 AM, you wrote:
 
 LA I have to agree with Manuel.  I write a lot of Haskell code.
 LA People even pay me to do it.  I usually stay with Haskell-98,
 
 when I wrote application code, I also didn't use extensions very much;
 I didn't even use much of Haskell 98 and AFAIR didn't define any type
 classes at all
 
 but when I turned to writing libraries that can be used by everyone,
 type classes and the need for even more extensions came up constantly.
 In particular, I try to write libraries so that any concept (stream,
 Binary, reference, array) can be used in any monad, and that
 immediately leads me to using MPTCs+FDs. Moreover, problems with
 resolving class overloading and other deficiencies of the current
 unofficial Hugs/GHC standard bite me constantly

Libraries are a means, not an end.  The fact still remains.  Most
programs don't need MPTCs in an essential way.  It's convenient to have
them, but they are not essential.

 I had a class which defines the default reference type for a monad:
 
 class Ref m r | m -> r where
   newRef :: a -> m (r a)
   readRef :: r a -> m a
   writeRef :: r a -> a -> m ()
 
 instance Ref IO IORef where ...
 instance Ref (ST s) (STRef s) where ...
 
 
 this class allows one to write monad-independent code, for example:
 
 doit = do x <- newRef 0
           a <- readRef x
           writeRef x (a+1)
           readRef x
 
 can be run in the IO or ST monad, or in any monad derived from IO/ST
 (with appropriate Ref instance definitions). As you can see, even such
 a small example requires FDs.
[..]

 this would allow me to rewrite my Ref class as:
 
 type Ref IO = IORef
  Ref (ST s) = STRef s

You are going in the right direction, but what you want here are
associated types; i.e., you want type functions that are open and can be
extended (in the same way as classes can be extended by adding new
instances).  Your example reads as follows with associated types:

  class Monad m => RefMonad m where
    type Ref m :: * -> *
    newRef :: a -> m (Ref m a)
    readRef :: Ref m a -> m a
    writeRef :: Ref m a -> a -> m ()

  instance RefMonad IO where
    type Ref IO = IORef
    ...
  instance RefMonad (ST s) where
    type Ref (ST s) = STRef s
    ...
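
One possible completion of the elided IO instance (my sketch; the
IORef operations fit directly, and the ST case is analogous with
STRef):

  import Data.IORef

  instance RefMonad IO where
    type Ref IO = IORef
    newRef   = newIORef
    readRef  = readIORef
    writeRef = writeIORef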

My statement remains:  Why use a relational notation if you can have a
functional one?  (The RefMonad class is btw very similar to a functor of
ML's module system.[*])  See 

  http://hackage.haskell.org/trac/haskell-prime/wiki/AssociatedTypes

for more details on associated types.

Manuel

[*] Cf http://www.informatik.uni-freiburg.de/~wehr/diplom/



Re: the MPTC Dilemma (please solve)

2006-03-18 Thread Manuel M T Chakravarty
Claus Reinke:
 however, the underlying problem is not limited to MPTCs, and FDs 
 are not the only attempt to tackle the problem. so I agree with Isaac: 
 getting a handle on this issue is imperative for Haskell', because it will 
 be the only way forward when trying to standardize at least those of 
 the many extensions that have been around longer than the previous 
 standard Haskell 98. and if Haskell' fails to do this, it fails.

Please keep things in perspective:

(A) It's not as if every interesting program (or even the majority of
interesting programs) use(s) MPTCs.

(B) I don't think the time for which an extension has been around is
particularly relevant.  One of the big selling points of Haskell is that
it's quite well defined, and hence, its semantics is fairly well
understood and sane - sure, there are dark corners, but compared to
other languages of the same size, we are in good shape.  If we include
half-baked features, we weaken the standard.

In fact, it's quite worrying that FDs have been around for so long and
still resisted a thorough understanding.

Manuel




Re: the MPTC Dilemma (please solve)

2006-03-18 Thread Manuel M T Chakravarty
Isaac Jones:
 I'm forwarding an email that Martin Sulzmann asked me to post on his
 behalf.
 
 
 From: Martin Sulzmann [EMAIL PROTECTED]
 Subject: MPTC/FD dilemma 
 
 - ATs (associated types) will pose the same challenges.
   That is, type inference relies on dynamic termination checks.

Can you give an example?

Manuel




Re: Re[2]: [Haskell-cafe] STUArray

2006-03-18 Thread Manuel M T Chakravarty
Bulat Ziganshin:
 Hello Chris,
 
 Sunday, March 12, 2006, 2:05:09 PM, you wrote:
 
 CK Is GHC.PArr documented?
 
 it's perfectly documented in the module sources themselves :) you can
 also look at the ndpFlatten directory in the GHC compiler's sources.
 I've successfully used them in my program; of course this makes the
 program faster but GHC-only. So I plan to document it on
 http://haskell.org/haskellwiki/Arrays and incorporate it into the
 Data.Array.* infrastructure so that strict arrays will be emulated
 under Hugs.
 
 CK The -fparr option is not in the 6.4.1 User's Guide.
 
 they just forgot to do this :)  btw, strict arrays would be good
 candidates for the Haskell' library standard

No, it's not an oversight.  The implementation is not complete.  Just
last week, we started a second go at a complete implementation of
fast arrays.  Don't hold your breath though; it's a lot of work.

The interface of GHC.PArr will not change very much, but it'll use type
classes and associated types in a very essential way.  Underneath, a lot
of GHC-specific machinery will be used to optimise code, such as rewrite
rules.  So, I don't think this is suitable for Haskell'.

Manuel




Strict tuples

2006-03-18 Thread Manuel M T Chakravarty
Loosely related to Ticket #76 (Bang Patterns) is the question of whether
we want the language to include strict tuples.  It is related to bang
patterns, because its sole motivation is to simplify enforcing
strictness for some computations.  It's about empowering the programmer
to choose between laziness and strictness where they deem that necessary
without forcing them to completely re-arrange sub-expressions (as seq
does).

So what are strict tuples?  If a lazy pair is defined in pseudo code as

  data (a, b) = (a, b)

a strict pair would be defined as

  data (!a, b!) = ( !a, !b )

Ie, a strict tuple is enclosed by bang parentheses (! ... !).  The use
of the ! on the rhs are just the already standard strict data type
fields.

Why strict tuples, but not strict lists and strict Maybe and so on?
Tuples are the Haskell choice of returning more than one result from a
function.  So, if I write

  add x y = x + y

the caller gets an evaluated result.  However, if I write

  addmul x y = (x + y, x * y)

the caller gets a pair of two unevaluated results.  Even with bang
patterns, I still have to write

  addmul x y = let !s = x + y; !p = x * y in (s, p)

to have both results evaluated.  With strict tuples

  addmul x y = (!x + y, x * y!)

suffices.

Of course, the caller could invoke addmul using a bang pattern, as in

  let ( !s, !p ) = addmul x y
  in ...

but that's quite different to statically knowing (from the type) that
the two results of addmul will already be evaluated.  The latter leaves
room for more optimisations.

Syntax issues
~
* In Haskell (,) is the pair constructor.  What should we use for
  strict tuples?  (!,!) ?
* With strict tuples (! and !) would become some sort of 
  reserved/special symbol.  That interferes with bang patterns, as 
  (!x, y!) would be tokenized as (! x , y !).  We could use ( ... !) 
  for strict tuples to avoid that conflict, or just require that the
  user write ( !x, !y ) when they want a bang pattern.  (Just like you 
  cannot write `Just.x' to mean `Just . x' as the former will always be 
  read as a qualified name and not the application of function 
  composition.)

Bang patterns enable the programmer (among other things) to define
functions with strict arguments.  Strict tuples enable the programmer
to define strict results.

Manuel

PS: IIRC Clean supports strict tuples.




Re: ClassMethodTypes

2006-02-01 Thread Manuel M T Chakravarty
Ross Paterson:
 As I read it, the POPL'05 paper cited by the wiki page asserts that
 there is a problem, but does not explain what it is.  Is there a better
 reference?

I just added a slightly more detailed explanation as a subpage to
ClassMethodTypes.

Manuel

