Volunteer for a panel at HIW 2019

2019-07-26 Thread Iavor Diatchki
Hello,

I am sending this on behalf of Niki Vazou, who is organizing this
year's HIW---she is looking for a volunteer to represent Haskell' on a
panel, but the haskell-prime list is restricted to members only.
Details are in her message below.

If you are interested, please respond directly to her.

Cheers,
-Iavor

PS: Niki, sorry for the multiple messages; my message also bounced
because apparently I used the wrong e-mail address.


>
>
> -- Forwarded message --
> From: Niki Vazou 
> To: haskell-prime@haskell.org
> Date: Fri, 26 Jul 2019 14:13:36 +0200
> Subject: Haskell Implementors Workshop: Looking for Panelists
> Hello,
>
> Haskell Implementors Workshop 2019 will be having a 45 min panel discussion 
> (17.15-18.00 on Fri 23rd August).
>
> The goal is to have panelists that represent the following committees:
> - ghc steering committee
> - Devops committee
> - core libraries committee
> - Haskell' committee
> - Haskell.org committee
> - Haskell Symposium & HiW steering committee
>
> Let me know:
> - if you would like to be on the panel
> - if there are issues you want to raise in the panel discussion.
>
> Best,
> Niki
> Chair of HiW'19
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime


Re: Default module header `module Main where`

2017-05-16 Thread Iavor Diatchki
One potential difference between the two is that the current behavior
allows the `Main` module to import `main` from another module, while the
new behavior would fail in that case.

For example, a file that has only a single line:

import SomeOtherModule(main)

This still seems like a fairly unusual corner case (with an obvious
workaround), so I don't think it matters much, but I thought I'd mention it
so that we are aware of it.
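To make the corner case concrete, here is a sketch using made-up module and file names:

```haskell
-- File SomeOtherModule.hs (hypothetical):
--
--   module SomeOtherModule (main) where
--
--   main :: IO ()
--   main = putStrLn "hello"

-- File Main.hs consists of this single import.  With the current
-- default header, `module Main(main) where`, the imported `main` is
-- re-exported and the program links.  With the proposed `module Main
-- where`, only locally defined names are exported, so an
-- implementation looking for Main.main would reject the file.
import SomeOtherModule (main)
```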






On Tue, May 16, 2017 at 7:18 AM, Joachim Breitner 
wrote:

> Hi,
>
> a very small proposal to be considered for Haskell':
>
> Currently, the report states
>
> An abbreviated form of module, consisting only of the module body,
> is permitted. If this is used, the header is assumed to be ‘module
> Main(main) where’.
>
> I propose to change that to
>
> An abbreviated form of module, consisting only of the module body,
> is permitted. If this is used, the header is assumed to be ‘module
> Main where’.
>
> The rationale is that a main-less main module is still useful, e.g.
> when you are working a lot in GHCi, and offload a few extensions to a
> separate file. Currently, tools like hdevtools will complain about a
> missing main function when editing such a file.
>
> It would also work better with GHC’s -main-is flag, and avoid problems
> like the one described in https://ghc.haskell.org/trac/ghc/ticket/13704
>
>
> I don’t see any downsides. When compiling to a binary, implementations
> are still able to detect that a Main module is not imported by any
> other module and only the main function is used, and optimize as if
> only main were exported.
>
> Greetings,
> Joachim
>
>
>
> --
> Joachim “nomeata” Breitner
>   m...@joachim-breitner.de • https://www.joachim-breitner.de/
>   XMPP: nome...@joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F
>   Debian Developer: nome...@debian.org


Re: Default module header `module Main where`

2017-05-16 Thread Iavor Diatchki
That seems fairly reasonable to me.

-Iavor

On Tue, May 16, 2017 at 7:18 AM, Joachim Breitner 
wrote:

> Hi,
>
> a very small proposal to be considered for Haskell':
>
> Currently, the report states
>
> An abbreviated form of module, consisting only of the module body,
> is permitted. If this is used, the header is assumed to be ‘module
> Main(main) where’.
>
> I propose to change that to
>
> An abbreviated form of module, consisting only of the module body,
> is permitted. If this is used, the header is assumed to be ‘module
> Main where’.
>
> The rationale is that a main-less main module is still useful, e.g.
> when you are working a lot in GHCi, and offload a few extensions to a
> separate file. Currently, tools like hdevtools will complain about a
> missing main function when editing such a file.
>
> It would also work better with GHC’s -main-is flag, and avoid problems
> like the one described in https://ghc.haskell.org/trac/ghc/ticket/13704
>
>
> I don’t see any downsides. When compiling to a binary, implementations
> are still able to detect that a Main module is not imported by any
> other module and only the main function is used, and optimize as if
> only main were exported.
>
> Greetings,
> Joachim
>
>
>
> --
> Joachim “nomeata” Breitner
>   m...@joachim-breitner.de • https://www.joachim-breitner.de/
>   XMPP: nome...@joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F
>   Debian Developer: nome...@debian.org


Re: GitHub proposal repo permissions

2016-10-19 Thread Iavor Diatchki
Hi, I think Herbert added me to the correct group, thanks!

On Tue, Oct 18, 2016 at 10:24 AM, David Luposchainsky via Haskell-prime <
haskell-prime@haskell.org> wrote:

> On 12.10.2016 19:09, Iavor Diatchki wrote:
> > could someone with access fix it, maybe David
>
> I’m just a regular Haskell member (I think); Herbert gave me the access
> rights and I haven’t run into any problems yet. Now that I’m back from my
> holidays, I guess I’m a bit late in answering your issue.
>
> Anyway, I looked at the RFCS settings, and it seems like I could have
> added you.
> Is the issue resolved, or should I give you permissions?
>
> Greetings,
> David
>
> --
> My GPG keys: https://keybase.io/quchen


GitHub proposal repo permissions

2016-10-12 Thread Iavor Diatchki
Hello,

I was just trying to update the `Haskell 2020` project as it is not in sync
with the actual pull requests, but I can't see a way to do it.  Am I
missing something, or do I simply not have the required permissions?

If this is indeed a permissions issue, could someone with access fix it
(maybe David?).   Probably the easiest would be to create a group with all
the members and give it access to the repo.

-Iavor


Proposal: accept tuple sections

2016-10-12 Thread Iavor Diatchki
Hello,

it seems that there isn't much controversy over the TupleSections proposal,
so I'd like to move that we accept it for the next language standard.

Does anyone have any objections?

-Iavor
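For reference, a minimal sketch of what TupleSections (as implemented in GHC) permits; the function names are made up for illustration:

```haskell
{-# LANGUAGE TupleSections #-}

-- (,True) abbreviates \x -> (x, True): the missing component
-- becomes the section's argument.
pairWithTrue :: [Int] -> [(Int, Bool)]
pairWithTrue = map (,True)

-- Sections may omit any component; (1,) abbreviates \y -> (1, y).
firstIsOne :: Char -> (Int, Char)
firstIsOne = (1,)
```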


Process question

2016-10-04 Thread Iavor Diatchki
Hello,

Now that we've started with a few proposals, I am realizing that I have no
idea how to proceed from here.  In particular:

1. How would I request that a proposal be rejected?
2. How would I request that a proposal be accepted?

Ideas?

-Iavor


Re: Step-by-step guide for creating a new proposal

2016-10-04 Thread Iavor Diatchki
OK, I put a section at the top saying that, followed by a summary of the
process for people who are familiar with the tools.  I also updated the
last list to say that you should add a link to the rendered version, and
explained how to do it.





On Tue, Oct 4, 2016 at 8:40 AM, David Luposchainsky via Haskell-prime <
haskell-prime@haskell.org> wrote:

> On 04.10.2016 01:27, Iavor Diatchki wrote:
> > During our Haskell Prime lunch meeting at ICFP, I promised to create a
> detailed
> > step-by-step guide for creating Haskell Prime proposals on GitHub.  The
> > instructions are now available here:
> >
> >  https://github.com/yav/rfcs/blob/instructions/step-by-
> step-instructions.md
> >
> > Please have a look and let me know if something is unclear, or if I
> misunderstood
> > something about the process.
>
> The target audience for this document is someone who is unfamiliar with
> Git and
> Github, which we should make clear at the beginning. As an experienced
> user, it
> left me searching for relevant information among all those sub-lists to
> find out
> that it really just is about opening a pull request containing a template.
> We might provide a link to the document in the process section [1] of the
> current README if others think this amount of detail helps lower the
> barrier to entry.
>
> One thing we should also mention somewhere is to please provide a link to
> the
> rendered version of the proposal in the ticket, because Git diffs are in a
> very
> reader-unfriendly format.
>
> Greetings,
> David
>
> [1]: https://github.com/yav/rfcs/tree/instructions#proposal-process
>
>
> --
> My GPG keys: https://keybase.io/quchen


Step-by-step guide for creating a new proposal

2016-10-03 Thread Iavor Diatchki
Hello,

During our Haskell Prime lunch meeting at ICFP, I promised to create a
detailed step-by-step guide for creating Haskell Prime proposals on
GitHub.  The instructions are now available here:

 https://github.com/yav/rfcs/blob/instructions/step-by-step-instructions.md

Please have a look and let me know if something is unclear, or if I
misunderstood something about the process.

Cheers,
-Iavor


Re: minutes from committee meeting at ICFP

2016-10-02 Thread Iavor Diatchki
Hello,

I just got back to the US, and have started uploading videos in earnest.
Hopefully, I'll get to the Haskell Symposium pretty soon, and the whole
discussion was recorded so that everyone can listen to it.  I started
taking notes at the beginning of the discussion, but then got distracted,
so my notes didn't end up being terribly useful.

-Iavor





On Tue, Sep 27, 2016 at 10:10 AM, Richard Eisenberg 
wrote:

> I recall that Iavor took notes from the podium. Iavor?
>
> > On Sep 27, 2016, at 12:58 PM, Ben Gamari  wrote:
> >
> > Richard Eisenberg  writes:
> >
> >> Below are the minutes from last week’s in-person meeting at ICFP among
> >> the attending members of the Haskell Prime committee. The conversation
> >> moved swiftly, and I’ve done my best at capturing the essence of
> >> attendees’ comments. The attendees have had a week to consider these
> >> notes with no suggestions submitted; I thus consider these notes
> >> ratified.
> >>
> > Did anyone take notes from Iavor's Status of Haskell discussion during
> > the Symposium? There were a few points brought up there that shouldn't
> > be forgotten.
> >
> > Cheers,
> >
> > - Ben
> >
>
>


Re: Limber separators

2016-05-12 Thread Iavor Diatchki
On Sat, May 7, 2016 at 1:44 AM, Jon Fairbairn 
wrote:

>
>
> The one this violates is “never make language design decisions
> to work around deficiencies in tools” The problem is that diff
> does its work in ignorance of the syntax and consequently
> produces poor results.
>
>
I think that this is an excellent principle that we should uphold.


Re: Scope of committee (can we do *new* things?)

2016-05-12 Thread Iavor Diatchki
I disagree that we should be standardizing language features that have not
been implemented.

I think having an implementation is important because:
   1. the act of implementing a feature forces you to work out details that
you may not have thought of ahead of time.  For example, for a small
syntactic extension, the implementation would have to work out how to fit
it in the grammar, and how to present the new feature in, say, error
messages.
   2. having an implementation allows users to try out the extension and
gain some empirical evidence that the extension is actually useful in
practice (this is hard to quantify, I know, but it is even harder if you
can't even use the extension at all).

If some feature ends up being particularly useful, it could always be
standardized in the next iteration of the language, when we've gained some
experience using it in practice.

-Iavor



On Wed, May 11, 2016 at 11:17 AM, John Wiegley 
wrote:

> > Gershom B  writes:
>
> > While such changes should definitely be in scope, I do think that the
> proper
> > mechanism would be to garner enough interest to get a patch into GHC
> > (whether through discussion on the -prime list or elsewhere) and have an
> > experimental implementation, even for syntax changes, before such
> proposals
> > are considered ready for acceptance into a standard as such.
>
> Just a side note: This is often how the C++ committee proceeds as well: a
> language proposal with an experimental implementation is given much higher
> credence than paperware. However, they don't exclude paperware either.
>
> So I don't think we need to rely on implementation before considering a
> feature we all want, but I do agree that seeing a patch in GHC first allows
> for much testing and experimentation.
>
> --
> John Wiegley  GPG fingerprint = 4710 CF98 AF9B 327B B80F
> http://newartisans.com  60E1 46C4 BD1A 7AC1 4BA2


Re: The GADT debate

2016-05-08 Thread Iavor Diatchki
Hello,

what is the state of the semantic specification of GADTs?  I am wondering
whether they fit in the usual CPO-style semantics for Haskell, or whether
we need some more exotic mathematical structure to give semantics to the
language.

-Iavor




On Sun, May 8, 2016 at 8:36 AM, Carter Schonwald  wrote:

>
>
> On Sunday, May 8, 2016, Richard Eisenberg  wrote:
>
>>
>> On May 7, 2016, at 11:05 PM, Gershom B  wrote:
>> >
>> > an attempt (orthogonal to the prime committee at first) to specify an
>> algorithm for inference that is easier to describe and implement than
>> OutsideIn, and which is strictly less powerful. (And indeed whose
>> specification can be given in a page or two of the report).
>>
>> Stephanie Weirich and I indeed considered doing such a thing, which
>> conversation was inspired by my joining the Haskell Prime committee. We
>> concluded that this would indeed be a research question that is, as yet,
>> unanswered in the literature. The best we could come up with based on
>> current knowledge would be to require type signatures on:
>>
>> 1. The scrutinee
>> 2. Every case branch
>> 3. The case as a whole
>>
>> With all of these type signatures in place, GADT type inference is indeed
>> straightforward. As a strawman, I would be open to standardizing GADTs with
>> these requirements in place and the caveat that implementations are welcome
>> to require fewer signatures. Even better would be to do this research and
>> come up with a good answer. I may very well be open to doing such a
>> research project, but not for at least a year. (My research agenda for the
>> next year seems fairly solid at this point.) If someone else wants to
>> spearhead and wants advice / a sounding board / a less active co-author, I
>> may be willing to join.
>>
>> Richard
>
>
>
> This sounds awesome.  One question I'm wondering about is what parts of
> the type inference problem aren't part of the same challenge in equivalent
> dependent data types? I've been doing some interesting stuff on the latter
> front recently.
>
> It seems that those two are closely related, but I guess there might be
> some complications in Haskell semantics for that? And/or is this alluded to
> in existing work on GADTs?
>
>
>
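A minimal GADT example of the kind under discussion; the type and constructor names are made up. With OutsideIn-style inference a single top-level signature suffices here, while the stricter scheme Richard sketches would additionally annotate the scrutinee and each branch:

```haskell
{-# LANGUAGE GADTs #-}

-- A small expression GADT: each constructor fixes the index `a`.
data Expr a where
  I   :: Int  -> Expr Int
  B   :: Bool -> Expr Bool
  Add :: Expr Int -> Expr Int -> Expr Int

-- Given the top-level signature, each branch refines `a`
-- (to Int or Bool) via the constructor's return type.
eval :: Expr a -> a
eval (I n)     = n
eval (B b)     = b
eval (Add x y) = eval x + eval y
```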


Re: Are there GHC extensions we'd like to incorporate wholesale?

2016-05-03 Thread Iavor Diatchki
Hello,

I think it'd be great to get started by specifying a few simple extensions,
such as the ones Lennart listed.  Even though they are very well
understood, and we have text about them in the GHC manual, we'd still have
to think of how to integrate their descriptions with the rest of the
report, and actually write the text.

I'd be happy to start working on any of these as soon as we've decided what
tools we'd like to use---I didn't follow the other thread closely, and I'd
be happy to use whatever people are comfortable with (as long as it works
on Linux, but I can't imagine that would be an issue).

-Iavor
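For reference — EmptyDataDecls, one of the simple extensions discussed in the quoted thread, is tiny; a sketch with made-up type names:

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- Empty data declarations: types with no constructors,
-- typically used as phantom-type tags.
data Raw
data Validated

-- The tag `t` records, in the type, whether the input was checked.
newtype Input t = Input String

-- Only this function can produce a Validated input.
validate :: Input Raw -> Input Validated
validate (Input s) = Input s
```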



On Tue, May 3, 2016 at 1:57 AM, Augustsson, Lennart <
lennart.augusts...@sc.com> wrote:

> Then I suggest we keep EmptyDataDecls!
>
> -Original Message-
> From: Herbert Valerio Riedel [mailto:hvrie...@gmail.com]
> Sent: 03 May 2016 09:50
> To: Augustsson, Lennart
> Cc: John Wiegley; haskell-prime@haskell.org
> Subject: Re: Are there GHC extensions we'd like to incorporate wholesale?
>
> On 2016-05-03 at 10:36:31 +0200, Augustsson, Lennart wrote:
>
> > I'd say there are extensions we should just adopt wholesale, but they
> > are all of a very simple syntactic kind.
> >
> > E.g., EmptyDataDecls
>
> Btw, I have no idea why EmptyDataDecls keeps getting proposed for
> inclusion in the next report: `EmptyDataDecls` is one of the extensions
> that went into Haskell2010! :-)
>
> To quote Haskell2010, Ch. 12:
>
> | Those implementations are also encouraged to support the following
> | named language features:
> |
> | - PatternGuards,
> | - NoNPlusKPatterns,
> | - RelaxedPolyRec,
> | - EmptyDataDecls,
> | - ForeignFunctionInterface
> |
> | These are the named language extensions supported by some pre-Haskell
> | 2010 implementations, that have been integrated into this report.
>
>
> There's actually a few more deltas to H98 (like e.g. DoAndIfThenElse), so
>
>  {-# LANGUAGE Haskell2010 #-}
>
> is not equivalent to
>
>  {-# LANGUAGE Haskell98,
>   PatternGuards, NoNPlusKPatterns, RelaxedPolyRec,
>   EmptyDataDecls, ForeignFunctionInterface #-}
>
> But surprisingly this kind of information seems hard to find in the
> H2010 report.
>


Re: relaxing instance declarations

2013-04-29 Thread Iavor Diatchki
Hello,

I think that if we want something along those lines, we should consider a
more general construct that allows declarations to scope over other
declarations (like SML's `local` construct).  It would be quite arbitrary
to restrict this only to instances.

-Iavor



On Mon, Apr 29, 2013 at 2:41 PM, Max Bolingbroke batterseapo...@hotmail.com wrote:

 You could probably get away with just using two where clauses:

 instance Foo a where
   bar = ...
     where
       auxiliary = ...




 On 28 April 2013 18:42, Edward Kmett ekm...@gmail.com wrote:

 Makes sense. I'm not sure what a good syntactic story would be for that
 feature though. Just writing down member names that aren't in the class
 seems to be too brittle and error prone, and new keywords seem uglier than
 the current situation.

 Sent from my iPad

 On Apr 28, 2013, at 1:24 PM, Doug McIlroy d...@cs.dartmouth.edu wrote:

  Not always. For example, you can't mess with the declaration
  of a standard class, such as Num.
 
  On Sun, Apr 28, 2013 at 12:06 PM, Edward Kmett ekm...@gmail.com
 wrote:
 
  You can always put those helper functions in the class and then just
 not
  export them from the module.
 
  On Sun, Apr 28, 2013 at 10:49 AM, Doug McIlroy d...@cs.dartmouth.edu
 wrote:
 
  Is there any strong reason why the where clause in an instance
  declaration cannot declare anything other than class
  operators? If not, I suggest relaxing the restriction.
 
  It is not unusual for declarations of class operators to
  refer to special auxiliary functions. Under current rules
  such functions have to be declared outside the scope in
  which they are used.
 
  Doug McIlroy

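Max's two-where-clauses suggestion from this thread can be written out concretely; a sketch with made-up class and helper names — note the helper scopes over a single method definition, not the whole instance:

```haskell
newtype Point = Point (Int, Int)

class Pretty a where
  pretty :: a -> String

instance Pretty Point where
  pretty (Point (x, y)) = paren (show x ++ "," ++ show y)
    where
      -- `paren` is local to this method's binding: it is neither a
      -- top-level definition nor an extra class member.
      paren s = "(" ++ s ++ ")"
```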


Re: What is a punctuation character?

2012-03-20 Thread Iavor Diatchki
Hello,

So I looked at what GHC does with Unicode and to me it is seems quite
reasonable:

* The alphabet is Unicode code points, so a valid Haskell program is
simply a list of those.
* Combining characters are not allowed in identifiers, so no need for
complex normalization rules: programs should always use the short
version of a character, or be rejected.
* Combining characters may appear in string literals, and there they
are left as is without any modification (so some string literals may
be longer than what's displayed in a text editor.)

Perhaps this is simply what the report already states (I haven't
checked, for which I apologize) but, if not, perhaps we should clarify
things.

-Iavor
PS:  I don't think that there is any need to specify a particular
representation for the unicode code-points (e.g., utf-8 etc.) in the
language standard.
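A small illustration of the "no normalization" point; the code points are standard Unicode (U+00F1 is precomposed ñ, U+0303 is the combining tilde), and the behaviour shown matches GHC's treatment of string literals:

```haskell
-- The precomposed character is a single Char; the combining
-- sequence is two Chars.  No normalization is applied, so the
-- two strings are not equal even though they render identically.
precomposed, combining :: String
precomposed = "\x00F1"
combining   = "n\x0303"
```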





On Fri, Mar 16, 2012 at 6:23 PM, Iavor Diatchki
iavor.diatc...@gmail.com wrote:
 Hello,
 I am also not an expert but I got curious and did a bit of Wikipedia
 reading.  Based on what I understood, here are two (related) questions
 that it might be nice to clarify in a future version of the report:

 1. What is the alphabet used by the grammar in the Haskell report?  My
 understanding is that the intention is that the alphabet is unicode
 codepoints (sometimes referred to as unicode characters).  There is no
 way to refer to specific code-points by escaping as in Java (the link
 that Gaby shared), you just have to write the code-points directly
 (and there are plenty of encodings for doing that, e.g. UTF-8 etc.)

 2. Do we respect unicode equivalence
 (http://en.wikipedia.org/wiki/Canonical_equivalence) in Haskell source
 code.  The issue here is that, apparently, some sequences of unicode
 code points/characters are supposed to be morally the same.  For
 example, it would appear that there are two different ways to write
 the Spanish letter ñ: it has its own number, but it can also be made
 by writing n followed by a modifier to put the wavy sign on top.

 I would guess that implementing unicode equivalence  would not be
 too hard---supposedly the unicode standard specifies a text
 normalization procedure.  However, this would complicate the report
 specification, because now the alphabet becomes not just unicode
 code-points, but equivalence classes of code points.

 Thoughts?

 -Iavor






 On Fri, Mar 16, 2012 at 4:49 PM, Ian Lynagh ig...@earth.li wrote:

 Hi Gaby,

 On Fri, Mar 16, 2012 at 06:29:24PM -0500, Gabriel Dos Reis wrote:

 OK, thanks!  I guess a take away from this discussion is that what
 is a punctuation is far less well defined than it appears...

 I'm not really sure what you're asking. Haskell's uniSymbol includes all
 Unicode characters (should that be codepoints? I'm not a Unicode expert)
 in the punctuation category; I'm not sure what the best reference is,
 but e.g. table 12 in
    http://www.unicode.org/reports/tr44/tr44-8.html#Property_Values
 lists a number of Px categories, and a meta-category P Punctuation.


 Thanks
 Ian




Re: What is a punctuation character?

2012-03-16 Thread Iavor Diatchki
Hello,
I am also not an expert but I got curious and did a bit of Wikipedia
reading.  Based on what I understood, here are two (related) questions
that it might be nice to clarify in a future version of the report:

1. What is the alphabet used by the grammar in the Haskell report?  My
understanding is that the intention is that the alphabet is unicode
codepoints (sometimes referred to as unicode characters).  There is no
way to refer to specific code-points by escaping as in Java (the link
that Gaby shared), you just have to write the code-points directly
(and there are plenty of encodings for doing that, e.g. UTF-8 etc.)

2. Do we respect unicode equivalence
(http://en.wikipedia.org/wiki/Canonical_equivalence) in Haskell source
code.  The issue here is that, apparently, some sequences of unicode
code points/characters are supposed to be morally the same.  For
example, it would appear that there are two different ways to write
the Spanish letter ñ: it has its own number, but it can also be made
by writing n followed by a modifier to put the wavy sign on top.

I would guess that implementing unicode equivalence  would not be
too hard---supposedly the unicode standard specifies a text
normalization procedure.  However, this would complicate the report
specification, because now the alphabet becomes not just unicode
code-points, but equivalence classes of code points.

Thoughts?

-Iavor






On Fri, Mar 16, 2012 at 4:49 PM, Ian Lynagh ig...@earth.li wrote:

 Hi Gaby,

 On Fri, Mar 16, 2012 at 06:29:24PM -0500, Gabriel Dos Reis wrote:

 OK, thanks!  I guess a take away from this discussion is that what
 is a punctuation is far less well defined than it appears...

 I'm not really sure what you're asking. Haskell's uniSymbol includes all
 Unicode characters (should that be codepoints? I'm not a Unicode expert)
 in the punctuation category; I'm not sure what the best reference is,
 but e.g. table 12 in
    http://www.unicode.org/reports/tr44/tr44-8.html#Property_Values
 lists a number of Px categories, and a meta-category P Punctuation.


 Thanks
 Ian




Re: FW: 7.4.1-pre: Show Integral

2011-12-24 Thread Iavor Diatchki
Hello,

The discussion on the libraries list is archived here:
http://www.haskell.org/pipermail/libraries/2011-September/016699.html

There hasn't been a corresponding discussion for Haskell Prime so,
technically, GHC deviates from the standard.

-Iavor


On Fri, Dec 23, 2011 at 9:41 AM, Simon Peyton-Jones
simo...@microsoft.com wrote:
 I'm confused too.  I'd welcome clarification from the Haskell Prime folk.

 S

 -Original Message-
 From: Serge D. Mechveliani [mailto:mech...@botik.ru]
 Sent: 23 December 2011 17:36
 To: Simon Peyton-Jones
 Subject: Re: 7.4.1-pre: Show  Integral

 On Thu, Dec 22, 2011 at 08:14:54PM +, Simon Peyton-Jones wrote:
 |  2011/12/22 Edward Kmett ekm...@gmail.com:
 |   The change, however, was a deliberate _break_ with the standard that
 |   passed through the library review process a few months ago, and is now
 |   making its way out into the wild.
 |
 |  Is it reasonable to enquire how many standard-compliant implementations
 |  of Haskell there are?

 Just to be clear, the change IS the standard.  GHC has to change to be 
 compliant.
 At least that's how I understand it.


 I am confused.
 I am looking now at the on-line specification of  Haskell-2010,
 6.3 Standard Haskell Classes.
 It shows that  Integral  includes  Show:

                           Eq     Show
                             \   /
                              Num
                              |
                       Enum  Real
                          \   |
                           Integral

 This is also visible in the further standard class declarations in this 
 chapter.

 Hence, for  `x :: Integral a => a'  it is correct to write  (shows x).
 And  ghc-7.4.0.20111219  does not allow this.
 So,  ghc-7.4.0.20111219  breaks the 2010 standard. Now, Edward Kmett writes 
 that
 this break is done deliberately.

 Am I missing something?

 This is the first time I have witnessed GHC deliberately breaking the
 current Haskell standard.
 Probably, many people (as myself) dislike this point of the standard.
 Well, they can write a dummy Show implementation for their type T:
      showsPrec _ _ = showString "(t :: T)",

 and wait for an improved standard, say, Haskell-II
 -- ?

 Regards,

 --
 Sergei
 mech...@botik.ru
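Sergei's dummy-instance workaround above, written out as a compilable sketch (the type T and its constructor are placeholders):

```haskell
-- A placeholder type that has no meaningful textual form.
data T = MkT

-- A dummy Show instance that satisfies the Haskell 2010
-- `Show => Num` superclass requirement without printing anything useful.
instance Show T where
  showsPrec _ _ = showString "(t :: T)"
```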








Re: TypeFamilies vs. FunctionalDependencies type-level recursion

2011-08-07 Thread Iavor Diatchki
Hello,

On Tue, Aug 2, 2011 at 6:10 PM, Simon Peyton-Jones
simo...@microsoft.com wrote:
 Julien: we should start a wiki page (see 
 http://hackage.haskell.org/trac/ghc/wiki/Commentary, and look for the link to 
 Type level naturals; one like that).  On the wiki you should
  * add a link to the latest version of our (evolving) design document.
  * specify the branch in the repo that has the stuff
  * describe the status

Yes, this would be quite useful!

 Iavor's stuff is still highly relevant, because it involves a special-purpose 
 constraint solver.  But Iavor's stuff is not integrated into HEAD, and we need 
 to talk about how to do that, once you are back from holiday Iavor.

I'll send an e-mail to the list when I'm back.  I think I've made
quite a bit of progress on the solver, and I've been working on a
document (actually a literate Haskell file) which explains how it
works and also my understanding of GHC's constraint solver that I'd be
very happy to get some feedback on.

-Iavor



Re: TypeFamilies vs. FunctionalDependencies type-level recursion

2011-07-30 Thread Iavor Diatchki
Hello,

On Sat, Jul 30, 2011 at 2:11 AM, o...@okmij.org wrote:


 Second, what is the status of Nat kinds and other type-level data that
 Conor was/is working on? Nat kinds and optimized comparison of Nat
 kinds would be most welcome. Type level lists are better still
 (relieving us from Goedel-encoding type representations).


I  did some work on adding a Nat kind to GHC, you can find the
implementation in the type-nats branch of GHC.   The code there introduces
a new kind, Nat, and it allows you to write natural numbers in types, using
singleton types to link them to the value level.  The constraint solver for
the type level naturals in that implementation is a bit flaky, so lately I
have been working on an improved decision procedure.  When ready, I hope
that the new solver should support more operations, and it should be much
easier to make it construct explicit proof objects (e.g., in the style of
System FC).
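The singleton-type linking described above can be sketched with the
interface this branch eventually became (GHC.TypeLits, whose module names
postdate this message; the Vec type and vecLength are made-up
illustrations):

```haskell
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}
import GHC.TypeLits (Nat, KnownNat, natVal)
import Data.Proxy (Proxy (..))

-- A length-indexed vector: the Nat lives only in the type.
newtype Vec (n :: Nat) a = Vec [a]

-- Recover the type-level length at the value level via a proxy,
-- the singleton-style link between the two levels.
vecLength :: forall n a. KnownNat n => Vec n a -> Integer
vecLength _ = natVal (Proxy :: Proxy n)
```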
-Iavor
PS: I am going on vacation next week, so I'll probably not make much
progress on the new solver in August.


Re: TypeFamilies vs. FunctionalDependencies type-level recursion

2011-06-15 Thread Iavor Diatchki
Hello,

On Wed, Jun 15, 2011 at 10:49 AM, dm-list-haskell-pr...@scs.stanford.edu wrote:

 At Wed, 15 Jun 2011 10:10:14 -0700,
 Iavor Diatchki wrote:
 
  Hello,
 
  On Wed, Jun 15, 2011 at 12:25 AM, Simon Peyton-Jones 
 simo...@microsoft.com
  wrote:
 
   | class C a b | a -> b where
   |   foo :: a -> b
   |   foo = error "Yo dawg."
   | 
   | instance C a b where
 
  Wait.  What about
 instance C [a] [b]
 
  No, those two are not different, the instance C [a] [b] should also be
  rejected because it violates the functional dependency.

 But now you are going to end up rejecting programs like this:

{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE UndecidableInstances #-}

 class C a b | a -> b
 class D a b | a -> b
 instance (D a b) => C [a] [b]

 And a lot of useful code (including HList) depends on being able to do
 things like the above.


Nope, this program will not be rejected because b is in the FD closure of
a.  This stuff used to work  a few GHC releases back, and I think that
this is the algorithm used by Hugs.

A functional dependency on a class imposes a constraint on the valid class
instances  (in a similar fashion to adding super-class constraints to a
class).  In general, checking this invariant may be hard, so it is fine for
implementations to be incomplete (i.e., reject some programs that do
satisfy the invariant or, perhaps, fail to terminate in the process).  OTOH,
I think that if an implementation accepts a program that violates the
invariant, then this is a bug in the implementation.


  The general rule defining an FD on a class like C is the following
 logical
  statement:
  forall a b1 b2.  (C a b1, C a b2) => (b1 = b2)

 And in fact b1 and b2 are equal, up to alpha-conversion.  They are
 both just free type variables.


No, this was intended to be a more semantic property.  Here it is in
English:

For any three ground types a, b1, and b2, if we can prove that both C
a b1 and C a b2 hold, then b1 and b2 must be the same type.   The
theory of functional dependencies comes from databases.  In that context, a
class corresponds to the schema for a database table (i.e., what columns
there are, and with what types).   An instance corresponds to a rule that
adds row(s) to the table.   With this in mind, the rule that I wrote above
exactly corresponds to the notion of a functional dependency on a database
table.
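
As a hedged illustration of this invariant (hypothetical instances), an
implementation honouring the dependency `a -> b` must reject a second
instance that gives `Int` a different `b`:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

class C a b | a -> b

instance C Int Bool
-- The following would be rejected: Int already determines Bool.
-- instance C Int Char
```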

-Iavor


Re: In opposition of Functor as super-class of Monad

2011-01-05 Thread Iavor Diatchki
Hi,
indeed, this is called ap in Control.Monad.  So if we have an instance of
Monad, all that needs to be done to support the other instances is:

instance (SameContextAsTheMonadInstance) => Functor MyType where fmap =
liftM
instance (SameContextAsTheMonadInstance) => Applicative MyType where pure =
return; (<*>) = ap

Furthermore, this is only in the cases where we are defining the type from
scratch, and not using a library like monadLib or MTL, otherwise a simple
deriving is sufficient.
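
A self-contained sketch of this recipe (the `Counter` type is invented for
illustration):

```haskell
import Control.Monad (ap, liftM)

-- A toy state-passing monad defined from scratch.
newtype Counter a = Counter { runCounter :: Int -> (a, Int) }

instance Monad Counter where
  return x = Counter (\n -> (x, n))
  m >>= f  = Counter (\n -> let (x, n') = runCounter m n
                            in  runCounter (f x) n')

-- The other two instances come for free from the Monad instance:
instance Functor Counter where
  fmap = liftM

instance Applicative Counter where
  pure  = return
  (<*>) = ap
```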

-Iavor

On Wed, Jan 5, 2011 at 12:29 PM, Tony Morris tonymor...@gmail.com wrote:

 On 06/01/11 04:58, Isaac Dupree wrote:
  Tony, you're missing the point... Alexey isn't making a complete patch
  to GHC/base libraries, just a hacky-looking demonstration.  Alexey is
  saying that in a class hierarchy (such as if Functor => Monad were a
  hierarchy, or for that matter XFunctor => XMonad or Eq => Ord), it
  is still possible to define the superclass functions (fmap) in terms
  of the subclass functions (return and (>>=)) (such as writing a functor
  instance in which fmap f m = m >>= (return . f)).  This has always
  been true in Haskell, it just might not have been obvious.
 
 Oh right sorry. I thought a stronger point was being made.

 Then perhaps it's also worth pointing out that (<*>) can be written
 using (>>=) and return:
 f <*> a = f >>= \ff -> a >>= \aa -> return (ff aa)

 --
 Tony Morris
 http://tmorris.net/





Re: PROPOSAL: Include record puns in Haskell 2011

2010-02-26 Thread Iavor Diatchki
Hello,

In order to keep the discussion structured I have created two tickets
in the haskell-prime trac system
(http://hackage.haskell.org/trac/haskell-prime):
  * Proposal 1: Add pre-Haskell'98 style punning and record
disambiguation (ticket #136)
  * Proposal 2: Add record-wildcards (ticket #137)

I decided to split the two into separate tickets because, at least in
my mind, there are different things that we might discuss about the
two, and also they make sense independent of each other (although
record wildcards without punning might be a bit weird :-).

I think that both proposals are worth considering for Haskell 2011
because there are situations where they can significantly improve the
readability of code involving record manipulation.  I disagree with
the stylistic issues that were brought up in the discussion because I
do not believe that variable shadowing should be avoided at all
costs:  at least for me, avoiding shadowing is a means to an end
rather than an end in itself.  In the case of record puns, I think
that the clarity of the notation far surpasses any confusion that
might be introduced by the shadowing.  Furthermore, as other
participants in the discussion pointed out, the proposed features are
orthogonal to the rest of the language, so their use is entirely
optional.

-Iavor




On Fri, Feb 26, 2010 at 2:59 AM, Heinrich Apfelmus
apfel...@quantentunnel.de wrote:
 Simon Marlow wrote:
 While I agree with these points, I was converted to record punning
 (actually record wildcards) when I rewrote the GHC IO library.  Handle
 is a record with 12 or so fields, and there are literally dozens of
 functions that start like this:

   flushWriteBuffer :: Handle -> IO ()
   flushWriteBuffer Handle{..} = do

 if I had to write out the field names I use each time, and even worse,
 think up names to bind to each of them, it would be hideous.

 What about using field names as functions?

    flushWriteBuffer h@(Handle {}) = do
        ... buffer h ...

 Of course, you always have to drag  h  around.


 Regards,
 Heinrich Apfelmus

 --
 http://apfelmus.nfshost.com



PROPOSAL: Include record puns in Haskell 2011

2010-02-24 Thread Iavor Diatchki
Hello,
(Malcolm, sorry for the double post, I forgot to CC the list)
I was thinking mostly about the old-time-y punning, where I can
write a label, say theField, and it automatically gets expanded to
theField = theField, in record patterns and record constructing
expressions.
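
As a sketch of that expansion (hypothetical `Point` record), the punned and
fully written-out forms below are equivalent:

```haskell
{-# LANGUAGE NamedFieldPuns #-}

data Point = Point { x :: Int, y :: Int }

-- With punning: the pattern {x, y} binds variables x and y ...
swap :: Point -> Point
swap Point{x, y} = Point{x = y, y = x}

-- ... which expands to the explicit form:
swap' :: Point -> Point
swap' Point{x = x, y = y} = Point{x = y, y = x}
```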

The only corner case that I can remember about this is the interaction
with qualified names, the issue being what happens if a label in a pun
is qualified?  I think that in such cases we should just used the
unqualified form for the variable associated with the label.  In
patterns, I can't think of any other sensible alternative.  In
expressions, I could imaging expanding A.theField to A.theField =
A.theField but it seems that this would almost never be what we want,
while in all the uses I've had A.theField = theField is what was
needed.

I think that this is exactly what GHC implements, at least based on
the following example:

module A where data T = C { f :: Int }

{-# LANGUAGE NamedFieldPuns #-}
module B where
import qualified A

testPattern (A.C { A.f }) = f
testExpr f                = A.C { A.f }

I imagine that this is fairly close to what was in Haskell 1.3?   As
far as wild-cards are concerned, I don't feel particularly strongly
about them either way (I can see some benefits and some drawbacks) so
I'd be happy to leave them for a separate proposal or add them to this
one, depending on how the rest of the community feels.

-Iavor





On Wed, Feb 24, 2010 at 1:35 AM, Malcolm Wallace
malcolm.wall...@cs.york.ac.uk wrote:
 I'd like to propose that we add record punning to Haskell 2011.

 Can you be more specific?  Do you propose to re-instate punning exactly as
 it was specified in Haskell 1.3?  Or do you propose in addition some of the
 newer extensions that have been recently implemented in ghc (but not other
 compilers), such as record wildcards?

 Regards,
    Malcolm



Re: [Haskell] Nominations for the Haskell 2011 committee

2009-12-29 Thread Iavor Diatchki
Hello,
I would like to participate in the design of Haskell 2011.  I have
used Haskell for about 10 years, commercially at Galois Inc, for the
last 3.  I have a good understanding of all parts of the language and
various implementations, and I have a particular interest in its type
system and semantics, which is why I think that I would be able to
provide valuable input to the committee.
-Iavor

On Mon, Dec 14, 2009 at 4:34 AM, Simon Marlow marlo...@gmail.com wrote:
 So that the Haskell 2011 cycle can get underway, we are soliciting
 nominations for new committee members.  Since this is the first time we've
 done this, the procedure is still somewhat unsettled and things may yet
 change, but the current guidelines are written down here:

 http://hackage.haskell.org/trac/haskell-prime/wiki/Committee

 In particular, on the makeup of the commitee:

  The committee should represent each class of stakeholders with
  roughly equal weight. These classes are

    * Implementers (compiler/tool writers)
    * Commercial users
    * Non-commercial users (e.g. open source)
    * Academic users (using Haskell in research)
    * Teachers
    * Authors

  In addition, members of the committee should be long-standing users
  with a deep knowledge of Haskell, and preferably with experience of
  language design. The committee should contain at least some members
  with a comprehensive knowledge of the dark corners of the Haskell
  language design, who can offer perspective and rationale for existing
  choices and comment on the ramifications of making different choices.


 To nominate someone (which may be yourself), send a message to
 haskell-pr...@haskell.org.  Please give reasons for your nomination.

 The current committee will appoint new commitee members and editors starting
 in the new year, so the deadline for nominations is 31 December 2009.

 During discussion amongst the current commitee, we realised that the choice
 of committee should be informed not just by the criteria above, but also by
 the particular proposals that are expected to be under consideration during
 this cycle.  With that in mind, we plan that following the nominations the
 current committee will choose a core commitee of up to 10 members, and
 further members may be appointed during the year based on expertise needed
 to consider particular proposals.  Accordingly, now would be a good time to
 start discussing which proposals should be considered in the Haskell 2011
 timeframe, as that may affect the choice of commitee members.

 More details on the current Haskell Prime process are here:

 http://hackage.haskell.org/trac/haskell-prime/wiki/Process


 Cheers,
        Simon
 ___
 Haskell mailing list
 hask...@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell



Re: Unsafe hGetContents

2009-10-10 Thread Iavor Diatchki
Hello,

well, I think that the fact that we seem to have a program context
that can distinguish f1 from f2 is worth discussing because I
would have thought that in a pure language they are interchangeable.
The question is, does the context in Oleg's example really distinguish
between f1 and f2?  You seem to be saying that this is not the
case:  in both cases you end up with the same non-deterministic
program that reads two numbers from the standard input and subtracts
them but you can't assume anything about the order in which the
numbers are extracted from the input---it is merely an artifact of the
GHC implementation that with f1 the subtraction always happens the
one way, and with f2 it happens the other way.

I can (sort of) buy this argument, after all, it is quite similar to
what happens with asynchronous exceptions (f1 (error 1) (error 2)
vs f2 (error 1) (error 2)).  Still, the whole thing does not
smell right:  there is some impurity going on here, and trying to
offload the problem onto the IO monad only makes reasoning about IO
computations even harder (and it is pretty hard to start with).  So,
discussion and alternative solutions should be strongly encouraged, I
think.

-Iavor







On Sat, Oct 10, 2009 at 7:38 AM, Duncan Coutts
duncan.cou...@googlemail.com wrote:
 On Sat, 2009-10-10 at 02:51 -0700, o...@okmij.org wrote:

  The reason it's hard is that to demonstrate a difference you have to get
  the lazy I/O to commute with some other I/O, and GHC will never do that.

 The keyword here is GHC. I may well believe that GHC is able to divine
 programmer's true intent and so it always does the right thing. But
 writing in the language standard ``do what the version x.y.z of GHC
 does'' does not seem very appropriate, or helpful to other
 implementors.

 With access to unsafeInterleaveIO it's fairly straightforward to show
 that it is non-deterministic. These programs that bypass the safety
 mechanisms on hGetContents just get us back to having access to the
 non-deterministic semantics of unsafeInterleaveIO.

  Haskell's IO library is carefully designed to not run into this
  problem on its own.  It's normally not possible to get two Handles
  with the same FD...

 Is this behavior specified somewhere, or is this just an artifact
 of a particular GHC implementation?

 It is in the Haskell 98 report, in the design of the IO library. It does
 not mention FDs of course. The IO/Handle functions it provides give
 no (portable) way to obtain two read handles on the same OS file
 descriptor. The hGetContents behaviour of semi-closing is to stop you
 from getting two lazy lists of the same read Handle.

 There's nothing semantically wrong with you bypassing those restrictions
 (eg openFile /dev/fd/0) it just means you end up with a
 non-deterministic IO program, which is something we typically try to
 avoid.

 I am a bit perplexed by this whole discussion. It seems to come down to
 saying that unsafeInterleaveIO is non-deterministic and that things
 implemented on top are also non-deterministic. The standard IO library
 puts up some barriers to restrict the non-determinism, but if you walk
 around the barrier then you can still find it. It's not clear to me what
 is supposed to be surprising or alarming here.

 Duncan



Re: Proposals and owners

2009-08-08 Thread Iavor Diatchki
I thought that the intended semantics was supposed to be that the only
element is bottom (hence the proposal to add a related empty case
construct)?
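
GHC later adopted exactly this combination under the EmptyCase extension; a
minimal sketch (the `Void`/`absurd` names are illustrative):

```haskell
{-# LANGUAGE EmptyCase, EmptyDataDecls #-}

data Void   -- no constructors; the only inhabitant is bottom

-- From an (impossible) value of Void we may conclude anything.
absurd :: Void -> a
absurd v = case v of {}
```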

On Thu, Aug 6, 2009 at 3:49 PM, Ross Paterson r...@soi.city.ac.uk wrote:
 On Wed, Jul 29, 2009 at 02:34:26PM -0400, Stephanie Weirich wrote:
 Ok, I've put together a page on EmptyDataDecls:

 http://hackage.haskell.org/trac/haskell-prime/wiki/EmptyDataDecls

 I think this needs a sentence about semantics, to the effect that the
 type is abstract.  (Not that its only element is bottom.)


Re: Proposals and owners

2009-07-31 Thread Iavor Diatchki
+1. I completely agree.

On Fri, Jul 31, 2009 at 6:04 PM, Lennart
Augustsson lenn...@augustsson.net wrote:
 I think that a natural extension to allowing empty data declarations
 would be to allow empty case expressions.

 On Wed, Jul 29, 2009 at 7:34 PM, Stephanie
 Weirich sweir...@cis.upenn.edu wrote:
 Ok, I've put together a page on EmptyDataDecls:

 http://hackage.haskell.org/trac/haskell-prime/wiki/EmptyDataDecls

 Cheers,
 Stephanie


Re: StricterLabelledFieldSyntax

2009-07-26 Thread Iavor Diatchki
Hello,
I am strongly against this change.  The record notation works just
fine and has been doing so for a long time.  The notation is really
not that confusing and, given how records work in Haskell, makes
perfect sense (and the notation has nothing to do with the precedence
of application because there are no applications involved).  In short,
I am not sure what problem is addressed by this change, while a very
real problem (backwards incompatibility) would be introduced.
-Iavor



On Sun, Jul 26, 2009 at 8:52 PM, Isaac
Dupree m...@isaac.cedarswampstudios.org wrote:
 Sean Leather wrote:

 To me, the syntax is not actually stricter, just that the precedence for
  labeled field construction, update, and pattern is lower. What is the
 effective new precedence with this change? Previously, it was 11 (or
 simply
 higher than 10). Is it now equivalent to function application (10)?

  maybe it's equivalent to infix 10 (not infixr/infixl) so that it doesn't
 associate with function application (or itself) at all, either left- or
 right- ly.  I didn't understand by reading the patch to the report...

 Ian Lynagh wrote:

 I think that even an example of where parentheses are needed would be
 noise in the report. I don't think the report generally gives examples
 for this sort of thing, e.g. I don't think there's an example to
 demonstrate that this is invalid without parentheses:
    id if True then 'a' else 'b'

 Well that's also something that in my opinion there *should* be an example
 for, because IMHO there's no obvious reason why it's banned (whereas most of
 the Report's syntax repeats things that should be obvious and necessary to
 anyone who knows Haskell).

 -Isaac


Re: StricterLabelledFieldSyntax

2009-07-26 Thread Iavor Diatchki
Hello,

On Sun, Jul 26, 2009 at 10:01 PM, Isaac
Dupree m...@isaac.cedarswampstudios.org wrote:
 Iavor Diatchki wrote:

 Hello,
 I am strongly against this change.  The record notation works just
 fine and has been doing so for a long time.  The notation is really
 not that confusing and, given how records work in Haskell, makes
 perfect sense (and the notation has nothing to do with the precedence
 of application because there are no applications involved).  In short,
 I am not sure what problem is addressed by this change, while a very
 real problem (backwards incompatibility) would be introduced.
 -Iavor

 a different approach to things that look funny has been to implement a
 warning message in GHC.  Would that be a good alternative?

Not for me. I use the notation as is, and so my code would start
generating warnings without any valid reason, I think.  What would
such a warning warn against, anyway?

-Iavor


Re: Proposal: Deprecate ExistentialQuantification

2009-07-23 Thread Iavor Diatchki
Hello,

Sorry for responding so late---I just saw the thread.  I don't think
that we should deprecate the usual way to define existentials.  While
the GADT syntax is nice in some cases, there are also examples when it
is quite verbose. For example, there is a lot of repetition in
datatypes that have many constructors, especially if the datatype has
parameters and a slightly longer name.  Furthermore, I find the type
variables in the declaration of the type quite confusing because they
have no relation to the type variables in the constructors.  Finally,
there is quite a lot of literature about the semantics of existential
types, while the semantics of GADTs seems quite complex, so it seems a
bit risky to mix up the two.
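
For reference, the two styles being compared look like this (a made-up
`Showable` type):

```haskell
{-# LANGUAGE ExistentialQuantification, GADTs #-}

-- Classic existential syntax:
data Showable = forall a. Show a => MkShowable a

-- The same type written in GADT syntax:
data Showable' where
  MkShowable' :: Show a => a -> Showable'
```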

-Iavor





On Thu, Jul 23, 2009 at 2:47 PM, Niklas Broberg niklas.brob...@gmail.com wrote:
 Discussion period: 2 weeks

 Returning to this discussion, I'm surprised that so few people have
 actually commented yea or nay. Seems to me though that...
 * Some people are clearly in favor of a move in this direction, as
 seen both by their replies here and discussion over other channels.
 * Others are wary of deprecating anything of this magnitude for
 practical reasons.
 * No one has commented in true support of the classic existential
 syntax, only wanting to keep it for legacy reasons.

 I'm in no particular hurry to see this deprecation implemented, and I
 certainly understand the practical concerns, but I would still very
 much like us to make a statement that this is the direction we intend
 to go in the longer run. I'm not sure what the best procedure for
 doing so would be, but some sort of soft deprecation seems reasonable
 to me.

 Further thoughts?

 Cheers,

 /Niklas


Re: Proposal: Deprecate ExistentialQuantification

2009-07-23 Thread Iavor Diatchki
Hi,
True, but then you have to declare the kind manually.
-Iavor

On Thu, Jul 23, 2009 at 7:36 PM, Sittampalam,
Ganesh ganesh.sittampa...@credit-suisse.com wrote:
 One can use the following style of GADT definition, which avoids the
 type variables in the declaration:

 {-# LANGUAGE GADTs, KindSignatures #-}
 module GADT where

 data Foo :: * -> * where
  Foo :: Int -> Foo Int

 Iavor Diatchki wrote:
 Hello,

 Sorry for responding so late---I just saw the thread.  I don't think
 that we should deprecate the usual way to define existentials.  While
 the GADT syntax is nice in some cases, there are also examples when
 it is quite verbose. For example, there is a lot of repetition in
 datatypes that have many constructors, especially if the datatype has
 parameters and a slightly longer name.  Furthermore, I find the type
 variables in the declaration of the type quite confusing because they
 have no relation to the type variables in the constructors.  Finally,
 there is quite a lot of literature about the semantics of existential
 types, while the semantics of GADTs seems quite complex, so it seems
 a bit risky to mix up the two.

 -Iavor





 On Thu, Jul 23, 2009 at 2:47 PM, Niklas
 Broberg niklas.brob...@gmail.com wrote:
 Discussion period: 2 weeks

 Returning to this discussion, I'm surprised that so few people have
 actually commented yea or nay. Seems to me though that...
 * Some people are clearly in favor of a move in this direction, as
 seen both by their replies here and discussion over other channels.
 * Others are wary of deprecating anything of this magnitude for
 practical reasons.
 * No one has commented in true support of the classic existential
 syntax, only wanting to keep it for legacy reasons.

 I'm in no particular hurry to see this deprecation implemented, and I
 certainly understand the practical concerns, but I would still very
 much like us to make a statement that this is the direction we intend
 to go in the longer run. I'm not sure what the best procedure for
 doing so would be, but some sort of soft deprecation seems
 reasonable to me.

 Further thoughts?

 Cheers,

 /Niklas

 ___
 Glasgow-haskell-users mailing list
 glasgow-haskell-us...@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users






Re: what about moving the record system to an addendum?

2009-07-07 Thread Iavor Diatchki
Hello,
I do not think that we should remove the current record/named fields
syntax, at least for the moment.  I use it a lot, and I do not want to
add extra pragmas or extensions to my cabal file.  In fact, one of
the purposes of Haskell', the way I understand it, is exactly to just
choose a stable set of extensions and give a name to them (so
decrease, not increase the number of pragmas).  I think that a new
reocrd/label system is way beyond the scope of Haskell'.  If people
want to experiment with new record systems they may already do so, by
defining a new extension.  A case in point is the Trex record system,
which is implemented in Hugs.
-Iavor

2009/7/7 Ravi Nanavati r...@bluespec.com:
 2009/7/7 Duncan Coutts duncan.cou...@worc.ox.ac.uk:
 On Mon, 2009-07-06 at 18:28 -0700, John Meacham wrote:
 Well, without a replacement, it seems odd to remove it. Also, Haskell
 currently doesn't _have_ a record syntax (I think it was always a
 misnomer to call it that) it has 'labeled fields'. None of the proposed
 record syntaxes fit the same niche as labeled fields so I don't see them
 going away even if a record syntax is added to haskell in the future.

 The people proposing this can correct me if I'm wrong but my
 understanding of their motivation is not to remove record syntax or
 immediately to replace it, but to make it easier to experiment with
 replacements by making the existing labelled fields syntax a modular
 part of the language that can be turned on or off (like the FFI).

 I'm not sure that I agree that it's the best approach but it is one idea
 to try and break the current impasse. It seems currently we cannot
 experiment with new record systems because they inevitably clash with
 the current labelled fields and thus nothing changes.

 I think it is a powerful approach to try and break the current impasse
 for the following reasons:

 1. Once implemented, Hackage and Cabal will soon give us accurate data
 on what publicly available Haskell code does and does not depend on
 NamedFields/TraditionalRecordSyntax/WhateverWeEndUpCallingIt
 2. Once deprecated, people will be encouraged to not depend on the
 traditional record syntax where the cost of avoiding it is small (I'm
 thinking of situations like the mtl-accessors / run functions where
 the traditional syntax is saving something like one function
 definition).
 3. Champions of alternative record syntaxes will know what on Hackage
 they can use out-of-the-box and what things they'd want to consider
 re-writing as examples of how their approach is superior.

 Does anyone have a concrete idea of what it would take to carve out the
 existing syntax as an addendum?

 Thanks,

  - Ravi


Re: Mutually-recursive/cyclic module imports

2008-09-01 Thread Iavor Diatchki
Hi,
a free copy is available at:
http://www.purely-functional.net/yav/publications/modules98.pdf
(the source code, is also available at the same site).
Hope that this helps,
-Iavor


On Tue, Aug 26, 2008 at 4:33 PM, John Meacham [EMAIL PROTECTED] wrote:
 On Tue, Aug 26, 2008 at 04:31:33PM -0700, John Meacham wrote:
 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.6.8816

 Doh! wrong paper.

 http://portal.acm.org/citation.cfm?id=581690.581692

 anyone have a free link?

John

 --
 John Meacham - ⑆repetae.net⑆john⑈


Re: PROPOSAL: Make Applicative a superclass of Monad

2008-06-29 Thread Iavor Diatchki
Hello,
I think that this is a good change to make, and I don't think that it
is in any way related to the introduction of class aliases, which is a
fairly major extension (i.e., it requires changes to the compiler),
that we have no experience with, and whose design has not really been
tried out in practice.

As for breaking Haskell'98 code, Haskell' already makes changes that
break existing code (e.g., the new notation for qualified infix
operators). Cabal's ability to track versioned dependencies between
packages should come in handy in implementing this change.

By the way, I once wrote a module that did exactly what Ashley is
proposing and re-exported everything else from the Prelude, so that
you could use it with GHC's no-implicit-Prelude option to try out
the design.  I could dig it out and post it, if anyone is interested.

-Iavor



On Fri, Jun 27, 2008 at 5:16 PM, John Meacham [EMAIL PROTECTED] wrote:
 On Thu, Jun 26, 2008 at 06:25:28PM -0700, Ashley Yakeley wrote:
 I wrote:
 Proposal:
 Make Applicative (in Control.Applicative) a superclass of Monad (in
 Control.Monad).

 So does the silence = approval rule apply here?

 I think that people believe this is generally a good idea, but until the
 actual language that is haskell' is formalized, library issues are on
 the backburner. But yeah, I think cleaning up various things about the
 haskell 98 class hierarchy is a good idea.

 Even if class aliases are not in the standard itself but this change
 was, implementing class aliasse would allow individual compilers to
 provide full back and forwards compatability with haskell 98 and
 haskell'.

 So, that might be a route to having our cake and eating it too. We can
 have the benefit of class aliases without having to break the haskell'
 rules and standardize such an immature extension since they were
 designed to be 'transparent' to code that doesn't know about them.
 Haskell 98 code, and conformant haskell' prime code (as in, code that
 doesn't explicitly mention class aliases) will coexist peacefully and we
 will get a lot of experience using class aliases for when they
 eventually perhaps do get standardized.

John



 --
 John Meacham - ⑆repetae.net⑆john⑈

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: The monomorphism restriction and monomorphic pattern bindings

2008-04-28 Thread Iavor Diatchki
Hi,


On Mon, Apr 28, 2008 at 9:42 AM, Simon Marlow [EMAIL PROTECTED] wrote:
  Ok.  So I counter-propose that we deal with pattern bindings like this:

   The static semantics of a pattern binding are given by the following
   translation.  A binding 'p = e' has the same meaning as the set of
   bindings

 z = e
 x1 = case z of { p -> x1 }
 ...
 xn = case z of { p -> xn }

   where z is fresh, and x1..xn are the variables of the pattern p.

  For bang patterns, I think it suffices to say that a bang at the top level
 of p is carried to the binding for z, and then separately define what banged
 variable bindings mean (i.e. add appropriate seqs).

  Does anyone see any problems with this?

Seems good to me.
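
Concretely, for a tuple binding the proposed translation reads as follows (a sketch; the binding, the fresh name z', and the types are invented for illustration):

```haskell
-- Illustrative instance of the proposed translation: the pattern
-- binding  (x, y) = e  becomes a binding for a fresh variable plus
-- one case-selector binding per variable of the pattern.
e :: (Int, String)
e = (1 + 1, "two")

z' :: (Int, String)
z' = e

x :: Int
x = case z' of (x', _) -> x'

y :: String
y = case z' of (_, y') -> y'
```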

  Oh, and I also propose to use the terminology variable binding instead of
 simple pattern binding, which is currently used inconsistently in the
 report (see section 4.4.3.2).

This also makes sense.  Perhaps we should use "strict variable
binding" instead of "banged variable binding" as well?

-Iavor
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: RFC: qualified vs unqualified names in defining instance methods

2008-04-26 Thread Iavor Diatchki
Hello,

On Fri, Apr 25, 2008 at 3:00 PM, Simon Marlow [EMAIL PROTECTED] wrote:
 ...
  It would be slightly strange if record construction required the
 unqualified name, but record update required the qualified name, when the
 field name is only in scope qualified.  So that indicates that we should
 allow either form in record construction (and instance declaration), i.e.
 Claus's suggestion + DisambiguateRecordFields.

My preference would be to disallow qualified names in: (i) the method
names of instances, (ii) record construction (C { l = e }), (iii)
record patterns (C { l = p }).  I think that this is consistent, and
it is easy to see what the labels refer to: in the case of
instances, the method belongs to the class in question (which can be
qualified), and in the case of records the label belongs to the
specified constructor (which can also be qualified).

As Simon points out, record updates still require qualified names but
I don't think that there is an inconsistency there because I think of
record updates as the application of an (oddly named) function, just
like record selection is the application of a (normally named)
function.  Therefore, it makes sense that we may have to use qualified
names to disambiguate which function we are referring to.

-Iavor
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: RFC: qualified vs unqualified names in defining instance methods

2008-04-25 Thread Iavor Diatchki
Hello,
I think that the H98 change was a good one.  Qualified names should
only be used in _uses_ of variables (to disambiguate) and not in
definitions because (hopefully) there is nothing to disambiguate in a
definition.

By the way, method definitions already have a distinction between what
is on the LHS and what is on the RHS. For example, consider the
following instance:

instance Show a => Show (Maybe a) where
   show Nothing  = "Nothing"
   show (Just a) = "Just " ++ show a

Here "show" is not a recursive function because the "show" on the RHS
is different from the "show" on the LHS.
So my preference is to keep the status quo on this issue.

-Iavor



On Fri, Apr 25, 2008 at 7:09 AM, Claus Reinke [EMAIL PROTECTED] wrote:
 consider Haskell 98 report, section 4.3.2 Instance Declarations:

The declarations d may contain bindings only for the class methods of C.
 It is illegal to give a binding for a class method that is not in scope, but
 the   name under which it is in scope is immaterial; in particular, it may
 be a   qualified name. (This rule is identical to that used for subordinate
 names   in export lists --- Section 5.2.) For example, this is legal, even
 though   range is in scope only with the qualified name Ix.range.
  module A where
import qualified Ix

instance Ix.Ix T where
  range = ...

  i consider this confusing (see example at the end), but even
  worse is that the reference to 5.2 appears to rule out the use of qualified
 names when defining instance methods.

  while this abbreviation of qualified names as unqualified names when
 unambiguous may be harmless in the majority of cases, it
  seems wrong that the more appropriate explicit disambiguation
  via qualified names is ruled out entirely.
  i submit that 4.3.2 should be amended so that qualified names are permitted
 when defining instance methods.

  here's an example to show that the unambiguity holds only on the
  lhs of the method definition, and that the forced use of unqualified
  names can be confusing:

module QI where
  import Prelude hiding (Functor(..))
import qualified Prelude (Functor(..))
  data X a = X a deriving Show
  instance Prelude.Functor X where fmap f (X a) = X (f a)
where q = (reverse fmap,Prelude.fmap not [True],reverse QI.fmap)
  fmap = fmap

  note that there are two unqualified uses of 'fmap' in the instance
  declaration, referring to different qualified names:
  - in the lhs, 'fmap' refers to 'Prelude.fmap', which isn't in scope
unqualified, only qualified

  - in the rhs, 'fmap' refers to 'QI.fmap'

  claus



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: The monomorphism restriction and monomorphic pattern bindings

2008-04-23 Thread Iavor Diatchki
Hello,
Removing the MR seems reasonable.  I am a little less certain about
the MPB rule though.  I suspect that, as the wiki page points out,
many uses of pattern bindings are monomorphic but still, there seem to
be a number of examples on the wiki where people have run into this
problem.  So I am not sure that the conclusion that nobody has noticed
the change is valid.  I do believe that you can probably work around
the problem in many situations but the question in my mind is why
should we have to work around stuff when we have a system that already
works?  In other words, what problem do MPBs solve?

I should also point out that if we were to adopt the MPB rule, we
would have to adjust the definition of what pattern bindings mean.
For example, I think that this is how things are desugared at the
moment:
(x,y)  = e
becomes
new_var = e
x = case new_var of (v,_) -> v
y = case new_var of (_,v) -> v

It seems that under MPB the second program is not equivalent to the
first because it is more polymorphic.

-Iavor



On Wed, Apr 23, 2008 at 10:32 AM, Simon Marlow [EMAIL PROTECTED] wrote:
 Folks,

  The current proposal on the table for what to do about the monomorphism
 restriction (henceforth MR) is

   * remove the MR entirely
   * adopt Monomorphic Pattern Bindings (MPB)

  Right now, the committee is almost uniformly in favour of dropping the MR,
 and most of us are coming round to the idea of MPB.  Since this area has
 historically been difficult to achieve a concensus on, I'm excited that we
 appear to be close to making a decision, and a good one at that!

  The arguments for removing the MR are pretty well summarised on the wiki:

  http://hackage.haskell.org/trac/haskell-prime/wiki/MonomorphismRestriction

  You can read about MPB here:


 http://hackage.haskell.org/trac/haskell-prime/wiki/MonomorphicPatternBindings

  GHC has implemented MPB by default (i.e. we deviate slightly from Haskell
 98) since 6.8.1.

  The nice thing about the combination of removing MR and adopting MPB is
 that we retain a way to explicitly declare monomorphic bindings.  These are
 monomorphic bindings:

   ~x = e
   [EMAIL PROTECTED] = e

  or if you don't mind a strict binding: !x = e.  The wiki points out that

   (x) = e

  would also be monomorphic, but arguably this is in poor taste since we
 expect (x) to mean the same as x everywhere.

  Cheers,
 Simon

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Status of Haskell Prime Language definition

2007-10-16 Thread Iavor Diatchki
Hello,

On 10/16/07, apfelmus [EMAIL PROTECTED] wrote:
 Iavor Diatchki wrote:
  apfelmus wrote:
  fundeps are too tricky to get powerful and sound at the same time.
 
  I am not aware of any soundness problems related to functional
  dependencies---could you give an example?

 http://hackage.haskell.org/trac/haskell-prime/wiki/FunctionalDependencies#Lossofconfluence

 But I should have said sound, complete and decidable instead :)

There is nothing about the system being unsound there.  Furthermore, I
am unclear about the problem described by the link...  The two sets of
predicates are logically equivalent (have the same set of ground
instances), here is a full derivation:

(B a b, C [a] c d)
using the instance
(B a b, C [a] c d, C [a] b Bool)
using the FD rule
(B a b, C [a] c d, C [a] b Bool, b = c)
using b = c
(B a b, C [a] c d, C [a] b Bool, b = c, C [a] b d)
omitting unnecessary predicates and rearranging:
(b = c, B a b, C [a] b d)

The derivation in the other direction is much simpler:
(b = c, B a b, C [a] b d)
using b = c
(b = c, B a b, C [a] b d, C [a] c d)
omitting unnecessary predicates
(B a b, C [a] c d)

-- 
Iavor
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Make it possible to evaluate monadic actions when assigningrecord fields

2007-07-16 Thread Iavor Diatchki

Hello,

I find the naming of values that is introduced by the do notation
useful and I am not at all convinced that the extra sugar that is
being proposed here makes the language simpler.  It seems to me that
the only way to know that a piece of code is safe would be to:
i) do the translation in your head to convince yourself that the
effects will be executed in the right order,
ii) make sure that you are using a commutative monad, or the order of
the effects is not important.

I like the current status quo because:
i) for values I can apply the usual pure reasoning that I am used to in Haskell,
ii) this makes it easier to refactor code, at least for me (e.g., it
is easy to insert a 'seq' here and there to control evaluation if I
have to)
iii) I find that it is easier to understand code that is a bit more
explicit because I have to keep less translations in my head
iv) I can use fmap and ap (and friends, e.g., like what Connor
suggested) to achieve a style that is similar to the imperative one,
when I think that the explicit naming is clumsy.

-Iavor
PS: Someone suggested that this syntactic sugar might be useful in the
context of STM or concurrent programming but I am skeptical about that
example because in that setting the order of effects is very
important... I could be convinced with examples though :-)



On 7/15/07, Chris Smith [EMAIL PROTECTED] wrote:

Hope you don't mind my butting in.

If you're looking for a compelling use case to make programming with
monads more natural in Haskell, I'd say STM makes for a good one.  There
is no question there as to whether a monad is the right way to do STM;
it is required.

In working on some code recently that uses STM rather heavily, what I've
found is that there are a couple things that make the experience
somewhat painful despite the general promise of the technique.  The most
important is fixed by Simon's proposal for monad splices.  I'd literally
jump for joy if something like this were included in a future version of
Haskell!

Frankly, I don't think anyone will be convinced to use a more functional
style by making programming in the STM monad more painful to do in
Haskell.  Instead, they will be convinced to be more hesitant about
using Haskell for concurrent programming.

--
Chris Smith



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: default fixity for `quotRem`, `divMod` ??

2007-06-18 Thread Iavor Diatchki

Hi,
This seems like a good idea.  We should make sure that we are writing
down such bugfixes somewhere besides the mailing list so that they do
not get lost.
-Iavor

On 6/18/07, Isaac Dupree [EMAIL PROTECTED] wrote:

I was just bitten in ghci by `divMod` being the default infixl 9 instead
of the same as `div` and `mod`.  Sure enough, the standard prelude
doesn't specify a fixity for `quotRem` and `divMod` even though `quot`,
`rem`, `div`, and `mod` are all infixl 7.  I propose that `quotRem` and
`divMod` also become infixl 7.

Isaac
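
To make the difference concrete, here is a sketch (the local `divMod'` is a hypothetical copy of `divMod`) showing where the proposed infixl 7 fixity diverges from the default infixl 9 — namely against operators of precedence 8 such as (^):

```haskell
infixl 7 `divMod'`

-- A copy of divMod carrying the proposed infixl 7 fixity.
divMod' :: Integral a => a -> a -> (a, a)
divMod' = divMod

-- At precedence 7, (^) (infixr 8) binds tighter, so this parses as
--   10 `divMod'` (2 ^ 2)
example :: (Int, Int)
example = 10 `divMod'` 2 ^ 2

-- Under the default infixl 9 it would instead parse as
--   (10 `divMod` 2) ^ 2
-- which is rejected, since (^) does not apply to a pair.
```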


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Wanted: warning option for usages of unary minus

2007-05-14 Thread Iavor Diatchki

Hello,

I agree with Simon on this one: x-1 should parse as expected (i.e.,
the infix operator - applied to two arguments x and 1). Having
this result in a type error would be confusing to both beginners and
working Haskell programmers.

I think that if we want to change anything at all, we should simply
eliminate the unary negation operator without changing the lexer
(i.e., we would have only positive literals).  Then we would have to
be explicit about what is currently happening implicitly in
Haskell98---we would write negate 1 instead of -1.

However, I don't think that this change is justified---as far as I
can see, the only benefit is that it simplifies the parser.  However,
the change is not backward compatible and may break some programs.

-Iavor
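
A small sketch of the status quo being defended here — `x-1` is ordinary infix subtraction, and `negate`/`subtract` are the explicit alternatives (all names below are invented for illustration):

```haskell
x :: Int
x = 5

-- "x-1" is the infix operator (-) applied to x and 1, not the
-- function x applied to the literal (-1):
subtracted :: Int
subtracted = x-1                 -- 4

-- Under the suggestion above, negative values would be written with
-- an explicit negate (or parentheses) instead of unary minus:
explicit :: Int
explicit = negate 1              -- what Haskell 98 writes as (-1)

-- Sections already need subtract, because (- 1) is unary negation:
decremented :: [Int]
decremented = map (subtract 1) [1, 2, 3]   -- [0, 1, 2]
```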

On 5/14/07, Simon Marlow [EMAIL PROTECTED] wrote:

John Meacham wrote:
 On Wed, Apr 11, 2007 at 09:05:21AM +0100, Simon Marlow wrote:
 I definitely think that -1# should be parsed as a single lexeme.
 Presumably it was easier at the time to do it the way it is, I don't
 remember exactly.

 I'd support a warning for use of prefix negation, or alternatively you
 could implement the Haskell' proposal to remove prefix negation completely
 - treat the unary minus as part of a numeric literal in the lexer only.
 This would have to be optional for now, so that we can continue to support
 Haskell 98 of course.

 yes please! odd that I look forward to such a minor change in the big
 scheme of things, but the current treatment of negation has annoyed me
 more than any other misfeature I think.

Really?  I'm beginning to have second thoughts about the proposed change to
negation for Haskell'.  The main reason, and this isn't pointed out as well as
it should be on the wiki, is that x-1 will cease to be an infix application of
(-), it will parse as x applied to the literal (-1).  And this is different from
x - 1 (syntax in which whitespace matters should be avoided like the plague,
IMO).  I think this would be worse than the current situation.

Cheers,
Simon




___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Polymorphic strict fields

2007-04-30 Thread Iavor Diatchki

Hello,

At present, the Haskell report specifies the semantics of strict
datatype fields (the ones marked with !) in terms of the strict
application operator $! [Section 4.2.1, paragraph Strictness flags].
However, if we were to add polymorphic fields to Haskell, then we
cannot use this definition anymore because the application operator
(or ``seq`` for that matter) does not have the correct type---we would
need many rank-2 versions, one for each possible type scheme.
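
For monomorphic fields the report's ``$!`` reading is unproblematic; here is a hedged sketch of what it means operationally (the constructor and helper names are invented for this example):

```haskell
import Control.Exception (SomeException, evaluate, try)

data Lazy   = Lazy   Int
data Strict = Strict !Int

-- Per the report, the strict constructor behaves as if defined by
--   mkStrict x = (\y -> Strict y) $! x
mkStrict :: Int -> Strict
mkStrict x = (\y -> Strict y) $! x

-- Force a value to WHNF and report whether that succeeded.
whnfSucceeds :: a -> IO Bool
whnfSucceeds v = do
  r <- try (evaluate (v `seq` ())) :: IO (Either SomeException ())
  return (either (const False) (const True) r)

probe :: IO (Bool, Bool)
probe = do
  lazyOk   <- whnfSucceeds (Lazy undefined)     -- True: field untouched
  strictOk <- whnfSucceeds (mkStrict undefined) -- False: $! forces it
  return (lazyOk, strictOk)
```

The question in this message is what the analogue of `$!` should be once the forced field's type is a polymorphic (and possibly qualified) scheme rather than a monotype.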

Notice, furthermore, that the behavior of such constructors may be a
bit unexpected when combined with overloading.  Consider, for example,
the following declarations:


data T = T !(forall a. Eq a => a)
test = seq (T undefined) True


In GHC 6.6 ``test`` evaluates to ``True`` because ``undefined`` is
converted to a function that expects its implicit evidence argument.
Hugs does not support strictness annotations on polymorphic fields (I
thought that this was a bug but, perhaps, it was a deliberate choice?)

It seems that even without overloading, we may think of polymorphic
values as being parameterized by their type arguments, so that they
are always in head-normal form.

All of this leads me to think that perhaps we should not allow
strictness annotations on polymorphic fields.  Would people find this
too restrictive? If so, does anyone have ideas about what should
happen and how to specify the behavior?

-Iavor
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: type aliases and Id

2007-03-19 Thread Iavor Diatchki

Hello,

On 3/19/07, Lennart Augustsson [EMAIL PROTECTED] wrote:

Ravi,

Ganesh and I were discussing today what would happen if one adds Id
as a primitive type constructor.  How much did you have to change the
type checker?  Presumably if you need to unify 'm a' with 'a' you now
have to set m=Id.  Do you know if you can run into higher order
unification problems?  My gut feeling is that with just Id, you
probably don't, but I would not bet on it.


It seems to me that even with just ''Id'' the problem is tricky.
Suppose, for example, that we need to solve ''f x = g y''.  In the
present system we can reduce this to ''f = g'' and ''x = y''.
However, if we had ''Id'', then we would have to delay this equation
until we know more about the variables that are involved (e.g., the
correct solution might be ''f = Id'' and ''x = g y'').

-Iavor
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: do-and-if-then-else modification

2007-02-19 Thread Iavor Diatchki

Hello,

On 2/18/07, Benjamin Franksen [EMAIL PROTECTED] wrote:

Section 3.6  Conditionals would have to be changed accordingly. It still
says exp -> if exp1 then exp2 else exp3.

Thanks! I fixed this too.

Just for the record, I don't think that this change is particularly
useful at all.  We made the change because it is on the definitely
in list accepted by the Haskell' committee and it seemed quite easy
to do.

-Iavor
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: rank-2 vs. arbitrary rank types

2007-02-06 Thread Iavor Diatchki

Hello,

Thanks for the responses!  Here are my replies (if the email seems too
long please skip to the last 2 paragraphs)

Simon PJ says:

Hmm.  To be consistent, then, you'd have to argue for rank-2 data constructors 
only,
since rank-2 functions can be simulated in the way you describe.


I think that this is an entirely reasonable point in the design space.
In fact, in the few situations where I have needed to use rank-2
types this has been sufficient.  However, there is nothing
inconsistent in allowing other constants besides constructors to have
rank-2 types.

Malcolm says:

The same position could be used to argue against higher-order types.
All higher-order programs can be reduced to first-order programs by
firstification (encoding functional arguments as data values, which are
then interpreted).  Yet most functional programmers agree that being
able to write higher-order definitions directly is more convenient.


This is not an accurate comparison: in functional programs functions
are first class values, while in the rank-N (and rank-2) proposal type
schemes and simple types are different.  The rank-N proposal allows
programmers to treat type schemes as types in some situations but not
others (e.g., the function space type constructor is special).  For
example, in Haskell 98 we are used to being able to curry/uncurry
functions at will but this does not work with the rank-N extension:


f :: (forall a. a -> a) -> Int -> Char  -- OK
g :: (forall a. a -> a, Int) -> Char -- not OK


Your point is well taken though, that perhaps in other situations the
rank-N extension would be more convenient.  It would be nice to have
concrete examples, although I realize that perhaps the added
usefulness only comes up for more complex programs.

Andres says:

Of course it's easier to define abbreviations for complex types,
which is what type is for ... However, if you define new datatypes,
you have to change your code, and applying dummy constructors everywhere
doesn't make the code more readable ...


Perhaps we should switch from nominal to structural type equivalence
for Haskell' (just joking :-)  I am not sure what you mean by
changing the code: with both systems we have to change from the
usual Haskell style: for rank-N we have to write type signatures, if
we can, while in the rank-2 design we use data constructors for the
same purpose.  Note that in some situations we might not be able to
write a type signature, for example, if the type mentions a local
variable:


f x = let g :: (forall a. a -> a) -> (Char,?)
  g h = (h 'a', h x)
  in ...


Of course, we could also require some kind of scoped type variables
extension but this is not very orthogonal and (as far as I recall)
there are quite a few choices of how to do that as well...

Anyways, it seems that most people are in favor of the rank-N
proposal.  How to proceed?  I suggest that we wait a little longer to
see if any more comments come in and then if I am still the only
supporter for rank-2 we should be democratic and go with rank-N :-)
If anyone has pros and cons for either proposal (I find examples very
useful!) please post them.

I guess another important point is to make sure that when we pick a
design, then we have at least one (current) implementation that
supports it (ideally, all implementations would eventually).  Could we
get a heads up from implementors about the the current status and
future plans in this area of the type checker?

-Iavor
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: rank-2 vs. arbitrary rank types

2007-02-03 Thread Iavor Diatchki

Hello,
(I'll respond on this thread as it seems more appropriate)

Simon PJ's says:

* Rank-N is really no harder to implement than rank-2.  The Practical type
inference.. paper gives every line of code required.  The design is certainly
much cleaner and less ad-hoc than GHC's old rank-2 design, which I dumped
with considerable relief.


I am not too concerned about the difficulty of implementation,
although it seems to me that implementing the rank-2 extension
requires a much smaller change to an existing Haskell 98 type checker.
My main worry about the rank-N design is that (at least for me) it
requires a fairly good understanding of the type checking/inference
_algorithm_ to understand why a program is accepted or rejected.  Off
the top of my head, here is an example (using GHC 6.4.2):


g :: (forall a. Ord a => a -> a) -> Char
g = \_ -> 'a' -- OK
g = g   -- OK
g = undefined  -- not OK


Simon PJ says:

* I think it'd be a pity to have an arbitrary rank-2 restriction.  Rank-2 
allows you to
abstract over a polymorphic function.  Well, it's not too long before you
start wanting to abstract over a rank-2 function, and there you are.


I don't think that the rank-N system is any more expressive than the
rank-2 one.  The reason is that by placing a polymorphic value in a
datatype we can decrease its rank.  In this way we can reduce a program
of any rank to just rank-2.  So it seems that the issue is one of
software engineering---perhaps defining all these extra types is a
burden?  In my experience, defining the datatypes actually makes the
program easier to understand (of course, this is subjective) because I
find it difficult to parse all the nested foralls to the left of the
arrows and see where the parens end (I suspect that this is related
to Simon's next point).  In any case, both systems require the
programmer to communicate the schemes of polymorphic values to the
type checker, but the rank-2 proposal uses constructors for this
purpose, while the rank-N one uses type signatures.
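
The encoding alluded to here can be sketched as follows (the function names are invented): wrapping the polymorphic argument in a newtype lowers the rank of any type that mentions it.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A rank-2 function: its argument is itself polymorphic.
applyPoly :: (forall a. a -> a) -> Int
applyPoly f = f 0

-- Taking applyPoly's type as an argument directly would be rank-3:
--   useRank3 :: ((forall a. a -> a) -> Int) -> Int
-- Wrapping the polymorphic part in a datatype removes the nesting:
newtype PolyId = PolyId (forall a. a -> a)

applyWrapped :: PolyId -> Int
applyWrapped (PolyId f) = f 0

-- (PolyId -> Int) -> Int contains no nested foralls at all, so only
-- the datatype declaration needs the extension, not this function.
useWrapped :: (PolyId -> Int) -> Int
useWrapped g = g (PolyId id)
```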

Simon PJ says:

* The ability to put foralls *after* a function arrow is definitely useful, 
especially
when type synonyms are involved.  Thus
  type T = forall a. a -> a
  f :: T -> T
We should consider this ability part of the rank-N proposal. The Practical type
inference paper deals smoothly with such types.  GHC's rank-2 design had an
arbitrary and unsatisfactory forall-hoisting mechanism which I hated.


There seem to be two issues here:
1) Allow 'forall's to the right of function arrows.  This appears to
be independent of the rank-2/N issue because these 'forall' do not
increase the rank of a function, so it could work in either system (I
have never really needed this though).
2) Allow type synonyms to name schemes, not just types.  I can see
that this would be quite useful if we program in the rank-N style as
it avoids the (human) parsing difficulties that I mentioned while
responing to the previous point.  On the other hand, the following
seems just as easy:


newtype T = T (forall a. a - a)
f (T x) = ...


Simon PJ says:

* For Haskell Prime we should not even consider option 3.  I'm far from
convinced that GHC is occupying a good point in the design space; the bug that
 Ashley points out (Trac #1123) is a good example.  We just don't know enough
about (type inference for) impredicativity.


This is good to know because it narrows our choices!  On the other
hand, it is a bit unfortunate that we do not have a current
implementation that implements the proposed rank-N extension.  I have
been using GHC 6.4.2 as an example of the non-boxy version of the
rank-N proposal, is this reasonable?

Simon PJ says:

In short, Rank-N is a clear winner in my view.


I am not convinced.  It seems to me that the higher rank polymorphism
extension is still under active research---after only 1 major version
of existsince, GHC has already changed the way it implements it, and
MLF seems to have some interesting ideas too.   So I think that for
the purposes of Haskell' the rank-2 extension is much more
appropriate:  it gives us all the extra expressive power that we need,
at very little cost to both programmers and implementors (and perhaps
a higher cost to Haskell semanticists, which we should not forget!).

And now it is time for lunch! :)
-Iavor
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Polymorphic components, so far

2007-02-01 Thread Iavor Diatchki

Hello,
Thanks to everyone who took time to comment on my notes.  My and Isaac's
previous post spawned a few separate discussions so I though I'd send
a separate message to summarize the status of what has happened so far
with regard to polymorphic components.

* Rank-2 vs Rank-n types.  I think that this is the most important
issue that we need to resolve which is why I am placing it first :-)
Here are our options:
 - option 1: Hugs style rank-2 types (what I described, very briefly)
 - option 2: GHC 6.4 style rank-2 types. As far as I understand,
these are the details:
 * Based on Putting Type Annotations to Work.
* Predicative (type variables range only over simple (mono) types)
 We do not need to compare schemes for equality but, rather, we
compare them for generality, using a kind of sub-typing.  There

* Notation for polymorphic types with explicit quantifiers.  The main
issue is if we should allow some corner case notational issues, such
as empty quantifier lists, and quantified variables that are not
mentioned in the type.
 - option 1: disallow these cases because they are likely to be
accidental mistakes.
 - option 2: allow them because they make automatic program generation simpler.
My initial proposal was suggesting 2 but I think that, having heard
the arguments, I am leaning towards option 1.

* Equality of schemes for labeled fields in different constructors.
My suggestion did not seem to be too controversial.  Stephanie is
leaning towards a more semantic comparison of  schemes.  Indeed, just
using alpha equivalence might be a bit too weak in some cases.
Another, still fairly syntactic option, would be to pick a fixed order
for the variables in quantifiers (e.g., alphabetic) for the purposes
of comparison.

* Pattern matching on polymorphic fields.  This does not appear to be
too controversial, although Atze  had some reservations about this
design choice.

This is all that I recall, apologies if I missed something (if I did
and someone notices, please post a reply so that we can keep track of
what is going on).

-Iavor
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Polymorphic components, so far

2007-02-01 Thread Iavor Diatchki

Hello,
(Apologies for the two emails, I accidentally hit the send button on
my client before I had finished the first e-mail...)

* Rank-2 vs Rank-n types.  I think that this is the most important
issue that we need to resolve which is why I am placing it first :-)
Our options (please feel free to suggest others)
 - option 1: Hugs style rank-2 types (what I described, very
briefly, on the ticket)
* Based on From Hindley Milner Types to First-Class Structures
* Predicative
* Requires functions with rank-2 types to be applied to all their
polymorphic arguments.
 - option 2: GHC 6.4 style rank-N types. As far as I understand,
these are the details:
* Based on Putting Type Annotations to Work.
* Predicative
* We do not compare schemes for equality but, rather, for
generality, using a kind of sub-typing.
* Function type constructors are special (there are two of them)
because of co/contra variance issues.
 - option 3: GHC 6.6 style rank-N types.  This one I am less familiar
with but here is my understanding:
 * Based on Boxy types: type inference for higher-rank types and
impredicativity
 * Impredicative (type variables may be bound to schemes)
 * Sometimes we compare schemes for equality (this is
demonstrated by the example on ticket 57) and we also use the
sub-typing by generality on schemes
 * Again, function types are special
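A minimal example of the kind of program all of these designs must handle is a record with a rank-2 polymorphic component.  The sketch below (my own names, checked against the RankNTypes extension as GHC implements it) shows one wrapped polymorphic function used at two different element types:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A record whose field has a rank-2 type: every caller of 'apply'
-- may instantiate the quantified variable 'a' however it likes.
newtype Poly = Poly { apply :: forall a. [a] -> Int }

len :: Poly
len = Poly length

-- The same wrapped function used at two different element types.
use :: Poly -> (Int, Int)
use p = (apply p "abc", apply p [True, False])

main :: IO ()
main = print (use len)
```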

So far, Andres and Stephanie prefer a system based on rank-N types
(which flavor?), and I prefer the rank-2 design.  Atze would like a
more expressive system that accepts the example presented on the
ticket.

I think this is all.
-Iavor


Polymorphic Components

2007-01-21 Thread Iavor Diatchki

Hello,

I have written some notes about changes to Haskell 98 that are
required to add the polymorphic components extension.   The purpose
of the notes is to enumerate all the details that need to be specified
in the Haskell report.  I don't have access to the haskell-prime wiki
so I attached the notes to the ticket for polymorphic components:
http://hackage.haskell.org/trac/haskell-prime/ticket/57

When there are different ways to do things I have tried to enumerate
the alternatives and the PROPOSAL paragraph marks the choice that I
favor.

-Iavor


Re: Are pattern guards obsolete?

2006-12-13 Thread Iavor Diatchki

Hi,

I am not clear why you think the current notation is confusing...
Could you give a concrete example?  I am thinking of something along
the lines:  based on how <- works in list comprehensions and the do
notation, I would expect that pattern guards do XXX but instead, they
confusingly do YYY.  I think that this will help us keep the
discussion concrete.

-Iavor


On 12/13/06, Yitzchak Gale [EMAIL PROTECTED] wrote:

Philippa Cowderoy wrote:
 This is what I get for replying straight away!

Oh, no, I'm happy that you responded quickly.

 I think my point is that I'm not aware of many people
 who actually think this is a problem or get confused.

Well, I don't mean that this is something that experienced
Haskell programmers will stop and scratch their heads
over.

But the more of these kinds of inconsistencies you have,
the worse it is for a programming language. The effect
is cumulative. When there are too many of them, they make
the language feel heavy, complex, and inelegant. They
increase the number of careless errors. They put
off beginners.

Regards,
Yitz




Re: character literal question

2006-12-02 Thread Iavor Diatchki

Hello,


It does actually make syntax hilighting more complex, and introduces
another special case.


How is that another special case? If anything, it seems to be removing
a special case because there is no need for an escape of the form \'.
Do you have a concrete example of what it complicates?


It's also no longer the same as strings, since """ doesn't mean "\"".


Fair enough, but it is easy to explain why these are different.  Can
we explain why we need to escape the quote in character literals?  So
far we have: (i) we want to be different from Ada, and (ii) we want to
do the same as C and Java.

A nice property of the small change that I proposed is that to switch
between a singleton string and a character we just need to change the
quotes.  With the current notation we have to also add a backslash if
the string happened to be "'", which seems redundant.
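That switch between the two literal forms can be checked concretely.  A minimal sketch under the current (Haskell 98) escaping rules:

```haskell
-- Under current rules the apostrophe must be escaped in a character
-- literal, while the singleton string containing it needs no escape.
apostropheChar :: Char
apostropheChar = '\''

apostropheString :: String
apostropheString = "'"

main :: IO ()
main = print (apostropheString == [apostropheChar])
```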

Anyways, this is not a big thing but it seems like a wrinkle that
could be fixed.

-Iavor


character literal question

2006-12-01 Thread Iavor Diatchki

Hello,
Is there a reason why we have to escape the character ' (apostrophe)
when used in a character literal?  For example, we have to write '\''
instead of '''.  (With syntax highlighting, the second is a lot better
looking than the first.)  It seems that in this case we do not need
the escape because a literal contains exactly one character.  If there
is no good reason for having the escape, I think that we should remove
this restriction in Haskell'.
-Iavor


Re: Teaching

2006-11-30 Thread Iavor Diatchki

Hello,

On 11/30/06, Philippa Cowderoy [EMAIL PROTECTED] wrote:

On Wed, 29 Nov 2006, Ashley Yakeley wrote:

 That something might confuse the beginning user should count for nothing if it
 does not annoy the more experienced user.


This experienced user regularly uses a haskell interpreter for a desk
calculator, not to mention for producing readable budgets that show all
the working. Removing defaulting would make that extremely tedious.


I do what you suggest all the time (I mean using Hugs as a calculator)
and I agree that it would be annoying without some sort of defaulting.
However, I am not sure that this particular use justifies the
addition of defaulting to the _language_.  For example, it is possible
that defaulting is implemented as a switch to the command-line
interpreter.
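For reference, this is the defaulting mechanism under discussion: Haskell 98's default rules resolve an otherwise ambiguous numeric type.  A minimal sketch:

```haskell
-- 'show (2 ^ 10)' constrains the result type only by (Num a, Show a),
-- so it is ambiguous; Haskell 98 defaulting resolves 'a' to Integer.
main :: IO ()
main = putStrLn (show (2 ^ 10))
```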

By the way, I agree with Ashley that the it is hard to teach
argument is over-used.

-Iavor


Re: Re[2]: Teaching

2006-11-30 Thread Iavor Diatchki

Hello,

On 11/30/06, Bulat Ziganshin [EMAIL PROTECTED] wrote:

Hello Iavor,
how about using Haskell for scripting? i find it as great alternative
to perl/ruby, particularly because i don't want to remember two
languages, particularly because of great data processing instruments


I am speculating, but I suspect that the uses of defaulting in a
Haskell program (e.g. a script) are a lot fewer than when one uses
Hugs as a calculator.  In addition, as people already mentioned,
ambiguities can be resolved with type signatures. We already have to do
this anyway when defaulting does not apply (e.g., if any user defined
classes are involved).

-Iavor


Re: digit groups

2006-10-25 Thread Iavor Diatchki

Hello,
while people are discussing different notations for literals, I
thought I should mention that in my work I have found it useful to
write literals ending in K (kilo), M (mega) or G (giga) for large
numbers.  For example, I can write 4K for (4 * 2^10), or 8M for (8 *
2^20) or 2G for (2 * 2^30).  It is simple to implement because it only
requires a modification to the lexer.  It is not really necessary
because we can achieve the same by defining:
kb, mb, gb :: Num a => a
kb = 1024
mb = 1024 * kb
gb = 1024 * mb
and now we can write (4 * kb) instead of 4096.  I still think that
the literal notation is nicer, though.
-Iavor



On 10/25/06, John Meacham [EMAIL PROTECTED] wrote:

just for fun, I have implemented this for jhc.
you can now write numbers like 10_000_000 if you choose.

I have not decided whether I like the feature or not. but what the heck.

John


--
John Meacham - ⑆repetae.net⑆john⑈


Re: Monomorphism restriction

2006-10-14 Thread Iavor Diatchki

Hello,

On 10/14/06, Bulat Ziganshin [EMAIL PROTECTED] wrote:

Hello haskell-prime,

first is the monomorphism restriction. why isn't it possible to check
_kind_ of parameter-less equation and apply monomorphism restrictions
only to values of kind '*'? so, this:

sum = foldr1 (*)

will become polymorphic because its kind is '* -> *' while this


Kinds are use to classify _types_ and not _expressions_.  In this
case, the expression has the type a -> a, with the constraint that
a is a numeric type.
This type is of kind *.  In general, all the executable parts of a
Haskell program have types that are of kind *.  The things that have
kinds like * -> * are type constructors.

On the other hand, you might wonder why not use the _type_ of an
expression in the monomorphism restriction (MR) and only monomorphize
expressions of a non-functional type.  The reason this would not work
is that the MR is used to avoid repeated evaluation of overloaded
expressions when they are instantiated to the same types, but the type
of an expression does not tell us if the expression is already
evaluated. Remember that in a functional language functions are values
like any other, and so we might have to do computation to evaluate
them.  This is why the MR uses a syntactic heuristic.

Having said that, many people find the MR weird because it is kind of
implementation specific.  Even without the MR, a compiler could
perform an optimization that notices that a value is being
instantiated with the same dictionary multiple times and avoid the
repeated evaluation.  The MR is especially weird if we consider the
fact that the Haskell report does not even require implementations to
have sharing (e.g., call by name semantics are OK),  so implementation
may duplicate evaluation at will...
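A small illustration of the restriction (my own sketch): without the signature below, the MR plus defaulting would fix 'double' at a single numeric type, and the second use would then be rejected.

```haskell
-- 'double' is a simple pattern binding (no arguments on the left-hand
-- side), so the monomorphism restriction applies to it; the explicit
-- signature keeps it polymorphic across both uses in 'main'.
double :: Num a => a -> a
double = (* 2)

main :: IO ()
main = print (double (3 :: Int), double (1.5 :: Double))
```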

-Iavor


Re: Proposal for stand-alone deriving declarations?

2006-10-05 Thread Iavor Diatchki

Hello,
A question about the syntax:  would there be a problem if we made the
'deriving' declaration look like an instance?  Then we would not need
the special identifier 'for', and also we have a more standard looking
notation.  I am thinking something like:
deriving Show SomeType
deriving Eq (AnotherType a)
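For comparison, here is a sketch in the standalone-deriving syntax that GHC later adopted (the StandaloneDeriving extension), which likewise avoids a special 'for' keyword by writing the declaration like an instance head:

```haskell
{-# LANGUAGE StandaloneDeriving #-}

data Color = Red | Green | Blue

-- Deriving declarations written apart from the data type itself.
deriving instance Show Color
deriving instance Eq Color

main :: IO ()
main = print (Red == Red, Blue)
```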
-Iavor



On 10/5/06, Björn Bringert [EMAIL PROTECTED] wrote:

Simon Peyton-Jones wrote:
 | What is not so nice is that you take a new keyword ('for'), which is
 | quite likely to have been used as a variable name in existing code.
 (Or
 | does it work out to use one of the 'special' names here?)

 The latter is what Bjorn has done.  That is, 'for' is only special in
 this one context.  You can use it freely otherwise.  As I understand it
 anyway.

Yes. There is even a for function somewhere in the libraries (or was
it the testsuite, can't remember), which tripped up one of my early
versions, before I had remembered to make for as a special ID in the
parser.

 | I think it would be useful to write the proposal in complete detail up
 | on the Haskell' wiki.

 Yes please.  Bjorn?  (It may just be a qn of transcribing the user
 manual stuff you have written.)

Sure. It seems that I have to be on the committee to write to the Wiki.
Can I join it?

/Björn


Re: Pattern guards

2006-09-28 Thread Iavor Diatchki

Hello,
I think that pattern guards are a nice generalization of ordinary
guards and they should be added to the language.  Of course, as you
point out, we can encode them using the Maybe monad, but the same is
true for nested patterns, and I don't think that they should be
removed from Haskell.  I think that the benefit of adding pattern
guards is similar to that of using nested patterns: it provides a
concise notation that is easy to explain and understand without having
to first learn about monads (even though there is a monad that is
hidden behind the scenes).  This, combined with the fact that pattern
guards are quite easy to implement, I think advocates in their favor.
-Iavor



On 9/28/06, Yitzchak Gale [EMAIL PROTECTED] wrote:

I would like to suggest a correction to ticket #56,
Pattern Guards.

It is easy to show that every expression written
using pattern guards can also be written in
Haskell 98 in a way that is essentially
equivalent in simplicity. (Proof below.)

In my opinion, the Haskell 98 version below is
more clear than the pattern guard version - it
makes the monad explicit. Even if you disagree,
I think it would be very difficult to argue that the
difference is important enough to justify the extreme
measure of adding new syntax to the language.

Therefore, the first two items under Pros are
false, and should be removed. The only remaining
Pro is that the extension is well-specified, which
has no value on its own.

The purpose of Haskell' is to remove warts from
Haskell, not add new ones. Pattern guards are
a serious wart - they further overload syntax that
is arguably already overused, as pointed out
in the referenced paper by Martin Erwig and
Simon Peyton Jones [EPJ].

I hope that there is still time to retract the evil decree
of definitely in Proposal Status for this ticket.

Regards,
Yitz

Proof: We first assume that the following declarations
are available, presumably from a library:

 data Exit e a = Continue a | Exit {runExit :: e}
 instance Monad (Exit e) where
   return = Continue
    Continue x >>= f = f x
    Exit e >>= _ = Exit e

(Note that this is essentially the same as the Monad
instance for Either defined in Control.Monad.Error,
except without the restriction that e be an instance
of Error.)

 maybeExit :: Maybe e -> Exit e ()
 maybeExit = maybe (return ()) Exit

Now given any function binding using pattern guards:

funlhs
 | qual11, qual12, ..., qual1n = exp1
 | qual21, qual22, ..., qual2n = exp2
 ...

we translate the function binding into Haskell 98 as:

funlhs = runExit $ do
  maybeExit $ do {qual11'; qual12'; ...; qual1n'; return (exp1)}
  maybeExit $ do {qual21'; qual22'; ...; qual2n'; return (exp2)}
  ...

where

  qualij' = pat <- return (e)   if qualij is pat <- e
  qualij' = guard (qualij)      if qualij is a boolean expression
  qualij' = qualij              if qualij is a let expression

For a conventional guard:

 | p = exp

we can simplify the translation to:

  when (p) $ Exit (exp)

Simplifications are also possible for other special cases.

This concludes the proof. Here are some examples, taken
from [EPJ]:

 clunky env var1 var2
  | Just val1 <- lookup env var1
  , Just val2 <- lookup env var2
  = val1 + val2
  ...other equations for clunky

translates to:

 clunky env var1 var2 = runExit $ do
   maybeExit $ do
  val1 <- lookup env var1
  val2 <- lookup env var2
 return (val1 + val2)
  ...other equations for clunky

 filtSeq :: (a -> Bool) -> Seq a -> Seq a
 filtSeq p xs
  | Just (y,ys) <- lview xs, p y = lcons y (filtSeq p ys)
  | Just (y,ys) <- lview xs      = filtSeq p ys
  | otherwise                    = nil

translates to:

 filtSeq :: (a -> Bool) -> Seq a -> Seq a
 filtSeq p xs = runExit $ do
   maybeExit $ do
  (y,ys) <- lview xs
 guard $ p y
 return $ lcons y $ filtSeq p ys
   maybeExit $ do
  (y,ys) <- lview xs
 return $ filtSeq p ys
   Exit nil

Note that in this case, the Maybe monad alone is
sufficient. That eliminates both the double lookup and
the double pattern match, as discussed in [EPJ]:

 filtSeq :: (a -> Bool) -> Seq a -> Seq a
 filtSeq p xs = fromMaybe nil $ do
   (y,ys) <- lview xs
   return $ if (p y)
 then lcons y (filtSeq p ys)
 else filtSeq p ys
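That last Maybe-based translation can be made self-contained by specializing the abstract sequence operations to plain lists.  A sketch, in which 'lview' is my list stand-in for the sequence interface and 'nil'/'lcons' become [] and (:):

```haskell
import Data.Maybe (fromMaybe)

-- List stand-in for the abstract sequence view used in the message.
lview :: [a] -> Maybe (a, [a])
lview []       = Nothing
lview (x : xs) = Just (x, xs)

-- The Maybe-monad translation of filtSeq, specialized to lists.
filtSeq :: (a -> Bool) -> [a] -> [a]
filtSeq p xs = fromMaybe [] $ do
  (y, ys) <- lview xs
  return (if p y then y : filtSeq p ys else filtSeq p ys)

main :: IO ()
main = print (filtSeq even [1 .. 10 :: Int])
```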


Re: Re[2]: Pattern guards

2006-09-28 Thread Iavor Diatchki

Hello,
This particular example we can do with pattern guards
(although it seems that a simple 'case' is more appropriate for this example)

On 9/28/06, Bulat Ziganshin [EMAIL PROTECTED] wrote:

Hello Conor,

Thursday, September 28, 2006, 10:30:46 PM, you wrote:

   gcd x y | compare x y ->
 LT = gcd x (y - x)
 GT = gcd (x - y) y
   gcd x _ = x


mygcd x y
 | LT <- z   = mygcd x (y-x)
 | GT <- z   = mygcd (x-y) y
 where
 z = compare x y

mygcd x _ = x

-Iavor


Re: map and fmap

2006-08-28 Thread Iavor Diatchki

Hello,

On 8/28/06, John Hughes [EMAIL PROTECTED] wrote:

No, map was never overloaded--it was list comprehensions that were
overloaded as monad comprehensions in Haskell 1.4. That certainly did lead
to problems of exactly the sort John M is describing.


I just checked the reports for Haskell 1.3 and 1.4 on the Haskell
website and they both state that the method of 'Functor' was 'map'.  I
only started using Haskell towards the end of 1.4, so I don't have
much experience with those versions of the language, but it seems that
having an overloaded 'map' was not much of a problem if only a few
people noticed.



As for an example of fmap causing trouble, recall the code I posted last
week sometime:

class Foldable f where
  fold :: (a -> a -> a) -> a -> f a -> a

instance Foldable [] where
  fold = foldr

example = fold (+) 0 (fmap (+1) (return 2))

Here nothing fixes the type to be lists. When I posted this, someone called
it contrived because I wrote return 2 rather than [2], which would have
fixed the type of fmap to work over lists. But I don't think this is
contrived, except perhaps that I reused return from the Monad class, rather
than defining a new collection class with overloaded methods for both
creating a singleton collection and folding an operator over a collection.
This is a natural thing to do, in my opinion, and it leads directly to this
example.


I don't think this example illustrates a problem with 'fmap'.  The
problem here is that we are using both an overloaded constructor
(return) and destructor (fold), and so nothing specifies the
intermediate representation.   The fact that 'map' removed the
ambiguity was really an accident. What if we did not need to apply a
function to all elements?

example = fold (+) 0 (return 2)

It seems that we could use the same argument to reason  the 'return'
should have the type 'a - [a]', or that we should not overload
'fold', which with the above type seems to be fairly list specific.
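The point can be made runnable with the standard Foldable and Applicative classes in place of the thread's hypothetical 'fold' class (my substitution, not the original code): both ends are overloaded, so only the annotation on the intermediate structure resolves the ambiguity.

```haskell
-- Both the producer ('pure') and the consumer ('foldr') are
-- overloaded; the ':: [Int]' annotation is what fixes the
-- intermediate container to be a list.
example :: Int
example = foldr (+) 0 (fmap (+ 1) (pure 2) :: [Int])

main :: IO ()
main = print example
```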

-Iavor


Re: map and fmap

2006-08-23 Thread Iavor Diatchki

Hello,

On 8/22/06, John Meacham [EMAIL PROTECTED] wrote:

I am not talking about type signatures, I am talking about having to
annotate in the middle of a term.

f x y | x `member` map g (freeVars y)  = 

having to become

f x y | x `member` map g (freeVars y :: [Id])  = 


There is no need to write such types... In this particular case the
type of 'elem' indicates that the argument is a list.  I don't think
that a polymorphic 'map' function requires any more signatures than,
say, '='.  This certainly is not my experience when I use 'fmap'...


So, I am not saying renaming fmap to map is bad outright, I am just
saying that the question is trickier than just the error message problem
it was previously stated in terms of.


Do you have an example illustrating what is tricky about 'fmap'?  As
far as I understand 'map' used to be polymorphic, and later the
distinction between 'map' and 'fmap' was specifically introduced to
avoid the error messages that may confuse beginners.

-Iavor


Re: map and fmap

2006-08-22 Thread Iavor Diatchki

Hello,
I agree that this is a small change, and I don't expect that it will happen.

On 8/21/06, John Meacham [EMAIL PROTECTED] wrote:

Yeah, the change doesn't seem worth it to me. And I still have concerns
about ambiguity errors, if a beginner ever has to use an explicit type
signature it sort of ruins the whole type inference benefit.


There is a big difference between having to declare all types vs.
writing type signatures only in some places.

In any case, it seems to me that it is good to encourage beginners to
write type signatures, because
(i) it clears their thinking about what they are trying to do,
(ii) it leads to more accurate type errors, because the system can
detect if it is the definition of a function that is wrong or its use.
In fact, I write type signatures for the same reasons.



I think
everyone has tried to write

class Cast a b where
cast :: a -> b

at some point but found it not very useful as whenever it was fed or
used as an argument to another overloaded function, you ended up with
ambiguity errors.

with all the added generality being added all over the place, you need
collections of functions that work on concrete data types to 'fix'
things every now and again. lists are common enough that I think they
deserve such 'fixing' functions. And it has nothing to do with newbies.

having to write type annotations when not even doing anything tricky is
not an acceptable solution for a type infered language.


The problem you are describing above is entirely different...
I agree that we should not overload everything, after all there must
be some concrete types in the program.

However, having a 'map' function that is specialized to lists in the
standard library seems quite ad-hoc to me, in a way it is comparable
to saying that 'return' should be specialized to IO, and we should
have 'mreturn' in the Monad class (I am not suggesting this! :-)

-Iavor


map and fmap

2006-08-14 Thread Iavor Diatchki

Hello,
I never liked the decision to rename 'map' to 'fmap', because it
introduces two different names for the same thing (and I find the name
`fmap' awkward).

As far as I understand, this was done to make it easier to learn
Haskell, by turning errors like Cannot discharge constraint 'Functor
X' into X =/= List.  I am not convinced that this motivation is
justified, although I admit that I have very limited experience with
teaching functional programming to complete beginners.  Still,
students probably run into similar problems with overloaded literals,
and I think, that a better approach to problems like these would be to
have a simplified learning Prelude for the beginners class, rather
than changing the standard libraries of the language (writing type
signatures probably also helps...)

Renaming 'fmap' to 'map' directly would probably break quite a bit of
code, as all instances would have to change (although it worked when
it was done the other way around, but there probably were fewer
Haskell programs then?).  We could work around this by slowly phasing
out 'fmap': for some time we could have both 'map' and 'fmap' in the
'Functor' class, with default definitions in terms of each other.  A
comment in the documentation would say that 'fmap' is deprecated.  At
some point, we could move 'fmap' out of the functor class, and even
later we could completely remove it.

This is not a big deal, but I thought I'd mention it, if we are
considering small changes to the standard libraries.

-Iavor


Re: termination for FDs and ATs

2006-05-03 Thread Iavor Diatchki

Hello,

On 5/3/06, Stefan Wehr [EMAIL PROTECTED] wrote:

class C a
class F a where type T a
instance F [a] where type T [a] = a
class (C (T a), F a) => D a where m :: a -> Int
instance C a => D [a] where m _ = 42

If you now try to derive D [Int], you get

 ||- D [Int]
subgoal: ||- C Int        -- via Instance
subgoal: ||- C (T [Int])  -- via Def. of T in F
subgoal: ||- D [Int]  -- Superclass


I do not follow this example.

If we are trying to prove `D [Int]` we use the instance to reduce the
problem to `C Int`, and then we fail because we cannot solve this
goal.

If we are trying to validate the second instance, then we need to
prove that the context of the instance is sufficient to discharge the
super class requirements:
C a |- C (T [a]), F [a]
We can solve `F [a]` using the instance, and (by using the definition
of `T`) `C (T [a])` becomes `C a`, so we can solve it by assumption.
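That reasoning can be checked with a compiler once the example is made complete.  A sketch using GHC's type-families syntax; the extra 'C Int' instance is my addition (so that 'D [Int]' is actually derivable), and the extension set is what modern GHC requires for a type-family application in a superclass context:

```haskell
{-# LANGUAGE TypeFamilies, FlexibleContexts, FlexibleInstances,
             UndecidableInstances, UndecidableSuperClasses #-}

class C a
instance C Int          -- added so that D [Int] holds

class F a where type T a
instance F [a] where type T [a] = a

-- Superclass C (T a) reduces to C a at instance F [a].
class (C (T a), F a) => D a where m :: a -> Int
instance C a => D [a] where m _ = 42

main :: IO ()
main = print (m [1 :: Int])
```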

-iavor


Re: FDs and confluence

2006-04-15 Thread Iavor Diatchki
Hello,

On 4/13/06, Ross Paterson [EMAIL PROTECTED] wrote:
 They are equivalent, but C [a] b d, Num c and C [a] c d, Num c are not.

I agree that this is the case if you are thinking of
forall a b c d. (C [a] b d, Num c) <=> (C [a] c d, Num c)
Here is a counter example (assume we also add an instance B Char Int
to the example)
a = Char, b = Char, c = Int, d = Bool
LHS: (C [Char] Char Bool, Num Int) = B Char Char = False
RHS: (C [Char] Int Bool, Num Int) = B Char Int = True

However, I don't think this is the case if you are thinking of:
forall a c d. exists b. (C [a] b d, Num c) <=> (C [a] c d, Num c)
becuase I can pick 'b' to be 'c'.

In any case I think this thread is drifting in the wrong direction.  I
was just looking for a concrete example of where we have a problem
with the non-confluence that people have found, e.g. something like
'here is an expression for which we can infer these two schemas, and
they have different sets of monomorphic instances'.  I think such an
example would be very useful for me (and perhaps others) to clarify
exactly what is the problem, so that we can try to correct it.

-Iavor


Re: FDs and confluence

2006-04-13 Thread Iavor Diatchki
Hello,

On 4/12/06, Claus Reinke [EMAIL PROTECTED] wrote:
 that's why Ross chose a fresh variable in FD range position:
 in the old translation, the class-based FD improvement rule no
 longer applies after reduction because there's only one C constraint
 left, and the instance-based FD improvement rule will only instantiate
 the 'b' or 'c' in the constraint with a fresh 'b_123', 'b_124', ..,
 unrelated to 'b', 'c', or any previously generated variables in the
 constraint store.

I understand the reduction steps.  Are you saying that the problem is
that the two sets are not syntactically equal?   To me this does not
seem important: we just end up with two different ways to say the same
thing (i.e., they are logically equivalent).  I think there would
really be a problem if we could do some reduction and end up with two
non-equivalent constraint sets, then I think we would have lost
confluence.  But can this happen?

 another way to interpret your message: to show equivalence of
 the two constraint sets, you need to show that one implies the
 other, or that both are equivalent to a common constraint set -
I just used this notion of equivalence, because it is what we usually
use in logic.  Do you think we should use something else?

 you cannot use constraints from one set to help discharging
 constraints in the other.
I don't understand this, why not?  If I want to prove that 'p iff q' I
assume 'p' to prove 'q', and vice versa.  Clearly I can use 'p' while
proving 'q'.  We must be talking about different things :-)

-Iavor


Haskell prime wiki

2006-04-13 Thread Iavor Diatchki
Hello,
The wiki page says that we should alert the committee about
inaccuracies etc of pages, so here are some comments about the page on
FDs
(http://hackage.haskell.org/trac/haskell-prime/wiki/FunctionalDependencies)

1) The example for non-termination can be simplified to:
f = \x y -> (x .*. [y]) `asTypeOf` y

2) The example for 'non-confluence' has a typo (bullet 2 should have a
'c' not a 'b'; as it is, the two are syntactically equal :-))

3) In the section on references it seems relevant to add a reference
to Simplifying and Improving Qualified Types by Mark Jones, because
it provides important background on the topic.

Hope this helps
-Iavor